A detailed recovery & backup guide for photographers and videographers

Being a pretty detail-oriented photographer with quite a bit of experience, I get asked about our backup strategy fairly regularly. At this point it’s elaborate enough to be worth talking about, so this article gets into the details.

As with any safety strategy, it’s important to first work out which threats you want to protect against, and only then develop plans to mitigate them. Security is costly, so it’s worth thinking about which particular risks you want to take into account, and which ones you can hopefully ignore to save time, money and effort.

For example, depending on where you live and work, you face different risks of theft, flood or power surge, so you may want to prioritize just some of these risks.

Let’s start with a simple table overview, noting the risks and ways of mitigating against them. We’ll expand on those one by one later in the article.

| Does this ⬇️ protect from this? ➡️ | Single disk failure | Data corruption | Flood, fire, theft | Deletion or overwrite | Viruses | Power surge |
|---|---|---|---|---|---|---|
| External drive (always on) | yes | no | no | partially | no | no |
| External drive (offline) | yes | no | no | partially | yes | yes |
| SSD (internal or external) | no | no | no | no | no | no |
| External disk array (RAID) | yes | no | no | no | no | no |
| Off-site physical backup | yes | no | yes | maybe | yes | yes |
| Cloud sync or backup | yes | no | yes | sometimes | no | yes |
| Cloud backup with extended history | yes | probably | yes | yes | yes | yes |
| Local sync utilities | – | helps | – | helps | – | – |
| Manual log-book | – | helps | – | helps | – | – |
| Retaining data on memory cards | yes | yes | no | partially | yes | yes |

Let’s explain each of these risks, and then move on to each mitigation strategy in §2.

§1: Risks

Single hard drive failure

Mechanical hard drives have moving parts, and at some point they will inevitably fail. It’s not a matter of if, but when, so it’s wise to be ready for it. Many factors go into a failure, so trying to forecast it is largely futile.

If you want to try, though, find a S.M.A.R.T. analysis utility and try to make sense of what the sensors in the hard drive are telling you. For example, if the bad sector count or the number of spin-up retries keeps rising over time, replace your drive ASAP.
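If you’re curious what that looks like in practice, here’s a minimal sketch that polls a few of those attributes via smartctl, the command-line tool from the smartmontools package. It assumes smartmontools is installed and that the drive shows up as /dev/sda; adjust for your own system, and note that attribute names vary by vendor.

```python
import subprocess

# Query the S.M.A.R.T. attribute table (usually needs root/admin privileges).
# The device path /dev/sda is an assumption; adjust it for your system.
output = subprocess.run(
    ["smartctl", "-A", "/dev/sda"],
    capture_output=True, text=True, check=False,
).stdout

# Attributes worth watching, per the advice above: reallocated (bad) sectors
# and spin-up retries. Rising raw values over time mean: replace the drive.
WATCHED = ("Reallocated_Sector_Ct", "Spin_Retry_Count", "Current_Pending_Sector")

for line in output.splitlines():
    fields = line.split()
    if len(fields) >= 10 and fields[1] in WATCHED:
        # The raw value is the last column of smartctl's attribute table.
        print(f"{fields[1]}: raw value {fields[-1]}")
```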

However, there are other things that can cause your drive to fail without prior notice (I’ve had brand new hard drives fail without warning after a month, due to manufacturing defects), so if you want peace of mind, it’s best to treat any hard drive as if it will fail tomorrow.

If your mechanical hard drive fails due to its motor or controller breaking down, it may be possible to fix it (at a high price) by professional data recovery services who would install a new motor or electronics from an identical healthy hard drive.

This is expensive and not guaranteed to work, so you’d probably use it only as a last resort.

SSD drives don’t have any moving parts and fail less often, but they still fail. Using an SSD only protects you from mechanical failures like a dying motor. If the controller, a flash storage chip or some other part of the disk electronics breaks, you will still lose your data.

Furthermore, SSDs can endure only a limited number of writes. When you reach the limit, they will either fail or go into read-only mode. This isn’t something that will happen soon (it usually takes hundreds of terabytes of writes to kill an SSD), but your mileage will vary according to your actual usage.

There are utilities which will help you determine how much usable life an SSD has remaining.

Mitigating against single disk failure is simple – have copies of your data on at least two different devices.

Data corruption

This happens rarely, but can be crippling.

There are three general sources of corruption: data transfer corruption (data gets corrupted while being transferred from one device to another, but the original data is still intact), data storage corruption (the physical sectors or memory chips are damaged, so the original data is damaged too) and data structure corruption (the file allocation tables on the hard drive are corrupt, or in the case of RAID systems, the RAID structure falls apart).

Depending on where the corruption happens, you have different options of protecting against it.

Memory card corruptions are the worst, as they affect the original data.

There is little chance of getting clean data off a corrupted card (although professionals can often save some of it), so my first advice is to test your memory cards periodically. There are utilities that can check the state of your cards by filling them with data and reading it back to check for any changes (i.e. corruption).
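If you’re wondering what such a test utility does under the hood, here’s a minimal sketch in Python, assuming the card is mounted as a regular folder (the mount point and file size below are placeholders, not recommendations):

```python
import os
import hashlib

CARD = "/Volumes/SD_CARD"   # hypothetical mount point of the card under test
CHUNK = 64 * 1024 * 1024    # write 64 MB files until the card is full

def fill_and_verify(path: str) -> None:
    digests = []
    index = 0
    # Phase 1: fill the card with pseudorandom data, hashing each file as we go.
    while True:
        data = os.urandom(CHUNK)
        try:
            with open(os.path.join(path, f"test_{index:05d}.bin"), "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())  # push the data out to the card itself
        except OSError:               # the card is full, stop writing
            break
        digests.append(hashlib.sha256(data).hexdigest())
        index += 1
    # Phase 2: read everything back and compare hashes; a mismatch = corruption.
    # (Real test utilities also bypass the OS read cache, e.g. by remounting
    # the card first, so the data truly comes back from the flash chips.)
    errors = 0
    for i, expected in enumerate(digests):
        with open(os.path.join(path, f"test_{i:05d}.bin"), "rb") as f:
            if hashlib.sha256(f.read()).hexdigest() != expected:
                print(f"CORRUPTION in test_{i:05d}.bin")
                errors += 1
    print(f"Checked {len(digests)} files, {errors} corrupted.")

fill_and_verify(CARD)
```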

I test each new card the moment I buy it, before it ever touches my camera.

In the span of my career, I’ve caught 3 bad brand-new SD memory cards from a renowned manufacturer. I sent them back with the testing report and got new cards without issues. Had I not tested them in advance, though, I would have ended up with corrupt photos and footage.

Most photographers use a dual-card setup in their cameras to mitigate against this. However, when you come home, you will usually download just one card and presume the data is okay. And this will most often be true, until one day it isn’t.

Unexpected memory card corruptions are difficult because they arise with little warning. They often start at a random part of the memory card and slowly spread, affecting more and more memory chips.

One solution is checking a random sample of photos after downloading them, to see if at least some photos are fine. If they are, there could still be a few corrupt photos scattered around, but at least you’ll know that a large number of photos isn’t affected.

Another solution is downloading both (identical) cards separately. If one turns out to have corruption, the second will provide clean data. This is a bit tedious to do on a regular basis, though.

My personal approach is a compromise between the two. I download just one set of cards, but then don’t format them until I’ve had the chance to go through at least some photos. If I notice any issues, I still have the second set of backup cards.

If I can’t afford that time-wise (e.g. because the next shoot is just around the corner), I will make the effort of copying both cards from the set into separate folders, and keep two copies of the data until I’ve had time to go through them.

Note: to properly check your files, you need to see the raw files decoded, e.g. opened in the Develop module in Lightroom. Otherwise, you may be looking at the smaller JPEG previews (like Photo Mechanic, Aftershoot or Narrative Select would show), which are less prone to corruption because they are much, much smaller than the raw data.

This also brings us to a possible solution if your data happens to get corrupted. Many professional cameras embed full-size JPEGs (at lower quality) to serve as quick previews of your raw files. These are the files you’re looking at when reviewing shots on your cameras and many apps for culling will use these, as they can be displayed quickly (raw files take time to decode).

These previews are much smaller than raw files, so if your data gets randomly corrupted in transfer, there are much smaller chances of these previews getting corrupted than the full raw files (think of it like Russian roulette). So, if corruption happens, you can try extracting these previews and post-processing them as a last resort.

You can batch-extract these previews with tools like exiftool; just prepare for a learning curve and some command line tinkering.
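As a starting point, here’s roughly what that looks like when driving exiftool from Python. The preview tag name varies by camera maker (PreviewImage, JpgFromRaw etc.), and the folder and extension below are placeholder assumptions:

```python
import subprocess

RAW_FOLDER = "/photos/wedding_raws"  # hypothetical folder of raw files
RAW_EXT = "CR2"                      # adjust for your camera (NEF, ARW, ...)

# -b              dump the tag's binary value
# -PreviewImage   the embedded JPEG preview (tag name differs per maker)
# -w              write each preview next to its raw file as <name>_preview.jpg
# -ext, -r        process only raw files, recursing into subfolders
subprocess.run(
    ["exiftool", "-b", "-PreviewImage",
     "-w", "%d%f_preview.jpg",
     "-ext", RAW_EXT, "-r", RAW_FOLDER],
    check=True,
)
```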

Let’s now touch upon specific types of corruption.

We had data transfer corruption happen to us once due to a bad USB controller – we ingested a few memory cards and the photos got randomly corrupted in transfer as they went over the computer’s USB interface. You can also have a bad contact on one end of the cable, a bad adapter, bad RAM chips (the working memory which temporarily stores the data) or even a corrupt memory card.

Protecting against this is best done by using a utility which supports data verification. Simply put, it copies all the data, then reads it back from the destination and compares it against the source. If everything matches, the data has been copied without errors. The downsides are that you need a special utility and that the process takes double the time it normally would.
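To make the idea concrete, here’s a minimal sketch of a verified copy in Python – the general principle, not the particular utility we use; the paths are placeholders:

```python
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large raw/video files don't fill up RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

def verified_copy(src_dir: Path, dst_dir: Path) -> None:
    for src in src_dir.rglob("*"):
        if not src.is_file():
            continue
        dst = dst_dir / src.relative_to(src_dir)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)                # copy the file with its timestamps
        if sha256_of(src) != sha256_of(dst):  # read both back and compare
            raise RuntimeError(f"Verification failed for {src}")

# Hypothetical paths, just for illustration:
verified_copy(Path("/Volumes/CARD/DCIM"), Path("/backup/wedding_2024"))
```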

We discussed data storage corruption in the context of memory cards – probably the worst kind, as you lose your original data right at the source. However, the same can happen on a mechanical hard drive (bad sectors on the spinning magnetic platters) or on an SSD (corrupt memory chips).

If this happens, your best bet is to find a healthy backup. Recovery from bad sectors or flash chips can be performed by professional data recovery services, but it will often turn up just a partial dataset, and the files may be corrupt anyway.

Finally, data structure corruption happens when the data is intact, but the pointers leading to that data get corrupted (imagine a book with a messed-up table of contents). It means your computer will be unable to locate the data when you try to access it.

For ordinary hard drives, this can usually be fixed with built-in macOS or Windows utilities. If you have a RAID system, you probably won’t be as lucky. These corruptions often mean the data is lost forever, due to the way RAID controllers spread chunks of data across multiple hard drives for redundancy. If they ‘forget’ where each chunk is, it’s as good as gone.

Flood, fire, theft

This is a group of adverse local events which tend to affect all the equipment you have at one physical location. Generally, no amount of local backups and copies will help against these, with a few exceptions probably not worth relying on (some SSDs and many memory cards are water resistant… but that’s about it).

Mitigation is simple – find a way to keep a copy of important data off-site. It can be a backup hard drive kept at your parents’ or a friend’s place, or a cloud backup. The trick here is finding the sweet spot between price and practicality.

Call me lazy, but I don’t have an off-site hard drive because it would be too tedious to regularly update it with new data. In practice, this would mean the copy of data on that drive is old and would be of little use in case I needed it.

Our preferred solution is automated cloud backup. An added bonus is that we can access our data anytime from anywhere, which isn’t possible with an off-site hard drive. The drawbacks are monthly payments for the service and longer backup times (depending on your internet connection speed).

Accidental deletion or overwrite

Basically, user error. It happens. We’d like to think it doesn’t, but at some point in time it usually will.

Deletion by user is one aspect. If you’re lucky, your data will be in your recycle bin. If you’re unlucky, the bin will be too small to fit your data, or you will have emptied it before noticing the error, or it will happen on storage media that doesn’t support the recycle bin functionality — like on a memory card or USB drive in Windows (macOS supports bin functionality on most removable media).

If the bin is already empty, depending on the storage medium, you can try resorting to data recovery software. This specialized software will scan your whole drive for remnants of useful data.

As recovery can take a long time with no guarantees, it’s good to know when data recovery is worth the effort, so let’s talk about that for a moment.

Generally, storage devices don’t actually destroy the data immediately upon deletion. They only mark it as deleted and available for new data. The deleted data will only get physically overwritten at some point in the future. When exactly this happens depends on multiple factors, like the amount of free space available, how much new data is being written, and the type of storage in question.

On mechanical hard drives, the deleted data will usually live for as long as you don’t write any new data.
This includes writes by your computer itself, which can write stuff in the background (like swap, temporary and cache files), which is why it’s critical to take the drive offline as soon as you realize you’ll need to perform data recovery. Mounting it again is best done in read-only mode, to preserve what’s left of the deleted data.

On memory cards and USB drives (flash drives), the data will also live for as long as you don’t record any new data. Your computer’s operating system usually won’t write any data by itself (as it would to a hard drive), so your deleted data should be safe until you’re ready for recovery.

In both cases, if the deletion is fresh you will usually be able to recover a large percentage of your data.

However, SSDs are a different story.

They have memory management built into their firmware (software that helps keep them healthy and speedy for longer), which means deleted data will soon get completely destroyed even without you doing anything. The mere fact that the disk is powered on allows it to perform maintenance (like TRIM and wear-leveling).

If your data gets deleted off an SSD, there is little hope of recovery. Your best bet is turning the disk off as soon as you realize the mistake and finding professional help… but don’t get your hopes up.

Finally, RAID disk arrays (like NAS or DAS systems) have their own kind of memory management which precludes you from using data recovery software on them. The data is subdivided into chunks and scattered across multiple disks for redundancy, so it would take a forensic team to piece it back together.
If you delete something on a RAID array and empty the recycle bin, it’s gone for all practical purposes.

Overwriting the data is somewhat of a different beast.

Example: let’s say you have a folder of photos which you’ve culled and processed, and through user error you overwrite it with the folder of original photos, which were neither culled nor processed.

This is not unheard of: if you’re juggling several computers and hard disks at the same time, there’s a solid chance of this happening sometime in your career.

My personal solution is keeping a manual log of what was done, where and when. Before any backup, I always consult the log to check my assumptions, and I update it as I back up. (More on this near the end of the article.)

Another great help is not using the usual copy functionality on Windows/macOS, but a specialized syncing tool. My personal favorite is FreeFileSync in one-way mirror mode.

Basically, it will only copy new or updated files between destinations and provide a list of changes before it does anything.

It’s a great sanity-check before doing anything important, and even better, it won’t introduce additional risk of re-writing the raw originals if you only need to update your metadata (like culling or processing information); it will only copy the updated files (e.g. XMPs).

So, if you’ve had any corruption later in the process due to whatever reason, your original raw files will remain safely in the backup destination, not getting overwritten as they would if you just copied the whole folder again.

Best of all, syncing just the changes this way is much faster than doing full copies.

Ransomware and viruses

Even with best security practices like strong passwords, updated software and avoiding suspicious links and sites, our operating systems have (and always will have) bugs and security holes. Threat actors will sometimes find a way to break in through these, and then, despite all your efforts, you may find yourself looking at a ransom note and gigabytes of locked data.

Ransomware viruses will make all your data unusable via strong encryption, so you’ll either have to pay for possible (but not guaranteed) decryption of your data, or just delete everything and restore from a backup.

The best mitigation is keeping a copy of the data somewhere ransomware can’t physically reach, as it automatically spreads over networks and to external drives.

It can be an offline drive that you rarely plug into your computer, which hopefully means that when you do get infected by ransomware, you will learn of its presence before you plug in your external drive for a regular backup. (If you have the bad luck of plugging it in while the virus is silently at work, you will lose the data on it as well, as most ransomware tries to encrypt external and network hard drives too.)

However, if you have a drive that you plug in once a week just for a quick differential backup, you stand a good chance of getting away unscathed in case of a ransomware attack.

The second thing that can help is some kind of cloud backup or sync system that offers extended history.

If your data gets encrypted, it will get automatically backed up to the cloud, overwriting your clean data. However, extended history means that the cloud system is preserving your old file versions for some time (e.g. 12 months).

These copies aren’t available directly from your computer, so ransomware can’t encrypt them. They provide an excellent mitigation strategy, albeit a bit tedious one (if you want to recover many thousands of files this way, it could take a lot of time… but at least you have an option!).

Power surge

Let’s briefly touch upon this as well. Depending on where you live and whether you have underground power cables coming to your house, lightning storms can cause power surges which can damage or destroy equipment. If your computer and hard drive are plugged in when this happens, you could lose data.

Your best bet is something that’s not connected to power (like an offline external hard drive) or an off-site backup.

Emphasis here is on equipment not being physically connected via a cable to a power outlet or other devices connected to power (e.g. USB cable connecting a hard drive to your computer). Lightning can fry even powered off devices if they’re still plugged in!


§2: Mitigation strategies

I’ve given a lot of hints on mitigation strategies by now, but let’s analyze them one by one in the following section. Feel free to skip parts you’re already familiar with.

External hard drive (always connected)

A drive which is always connected and available protects from very little. It mostly creates another copy of the data, so if your main drive fails, you have a copy.

It won’t protect from most other risks listed in our table, for obvious reasons – external drives will occasionally fail, they can get stolen or physically damaged along with all your other on-site equipment, they can get encrypted by ransomware, or a nearby lightning strike can fry them.

It doesn’t mean they’re a bad idea, however; it’s the exact opposite! They are an affordable and important piece in your backup strategy arsenal, just not the only piece.

External drive (mostly kept offline)

One step better is an external drive that you keep offline and connect only periodically for synchronizing new data.

If it’s offline and unplugged from power, it can’t get attacked by ransomware, you can’t accidentally delete something off it and a lightning strike shouldn’t harm it. Its mechanical life will also be extended since it will rarely be powered on.

SSDs

SSDs are fast but relatively expensive (looking at capacity vs. price), so most people will use them as working drives instead of backups.

For the most part they share the advantages and drawbacks of external mechanical drives, with the exception that they won’t fail mechanically, although they still can (and will) fail electronically – someday.

As explained above, recovery of deleted data off SSDs has very slim chances of succeeding due to specifics of how they work.

External disk array (RAID)

Photographers just love buying a RAID array, filling it with hard drives and thinking that’s that – the complete and foolproof backup strategy.

However, it’s important to realize that compared to simple external hard drives, they offer just a bit more protection. In particular, they have only one advantage, along with two drawbacks.

The advantage is that they do a great job of protecting you from a single hard drive failure (or even from two simultaneous drive failures, if you’ve configured them in dual-drive redundancy mode).

If a drive fails, you pull it out and put a fresh drive in. The RAID array will get rebuilt from the internal redundant copy. Great!

The first drawback is that, due to how your data is distributed across multiple drives, recovering deleted data via specialized utilities isn’t possible like it is on regular hard drives. If you accidentally delete something, the RAID management will delete the pointers to data scattered around multiple disks, and no one will be able to piece it back together again.

The second drawback is that the RAID array itself can get corrupted and lose all pointers to your data, effectively making the data inaccessible for good. It’s rare, but there are a few horror stories of this type across the internet. (On a classic hard disk, the file allocation table can often be rebuilt.)

The takeaway is this – a RAID array is a nice tool, but it solves just one very specific problem. My advice is to complement it with an offline external hard disk for additional protection!

Retaining data on original memory cards

This is a great strategy, almost foolproof, if you can afford enough memory cards to keep them unformatted long enough to make a difference.

It will mitigate against quite a number of issues, like a drive failure, accidental deletion, ransomware and even water damage, and uniquely – against corruption during data transfer.

As we use expensive CFexpress and XQD cards, we usually practice the alternative to keeping the data on the cards – verifying the data immediately after copying and creating multiple backups from that.
But when shooting a few weddings in close succession, with little time to check our data after copying, we will either leave one card of the redundant pair intact, or simply back up both cards from the pair to different computers.

Off-site physical backup

If some physical adversity happens to your studio, it’s possible that all data stored within will be gone, whether it’s theft or a fire. Having an off-site copy of important data is imperative in this case.

On the other hand, a physical drive at a different location is a bit tedious to update and keep fresh, because you have to keep bringing it into the studio and then removing it as soon as possible (so that it’s actually off-site).

To make this easier, some will opt to have two drives in rotation. One is at the studio, getting updated with fresh data and the second one is somewhere in a different location. Then they simply switch the two drives regularly (e.g. weekly or monthly).

Of course, this strategy is only as good as your discipline to regularly update the off-site copy. If you get lazy, it won’t do much good as you’ll only have an old copy of the data.

Cloud sync or backup

This is a great way to create an off-site backup that’s readily accessible (as opposed to a hard drive in your friend’s drawer which is the very definition of not accessible).

Most of the backup is done automatically, so it’s simpler (and much fresher) than the said drive kept at someone’s place, but with an added monthly price-tag.

The advantage is that the data is always fresh and the service makes sure to have redundant copies of your data in the cloud. This is an added benefit compared to a physical drive kept off-site which will also fail at some unknown point in the future.

The downside is that the cheapest version of cloud-backup will automatically sync any problems you create locally. If your data gets corrupted, encrypted, overwritten or deleted, all of this will soon be replicated in the cloud as well.

Sometimes you will get a small window of time (e.g. 30 days) where previous versions are retained and accessible, but this depends on your service and plan.

Cloud sync or backup with extended history

At an additional price, extended history is a great approach to protecting against unwanted changes to your data (like ransomware, deletion or corruption).

The cloud service will keep previous versions of your files, including deleted files, for an extended period (like 12 or 24 months). This gives you plenty of time to notice something is wrong and plenty of opportunity for recovery.

Local syncing utilities

Okay, now we’re getting into more esoteric territory.

Sync utilities will help in synchronizing changes across multiple folders and backups, without actually copying all the data each time. They can work in real-time, on a pre-set schedule or be run manually.

For example, when you shoot a wedding and back it up to 3 destinations, you will usually use one of those copies as a working copy for culling and processing. Once you’re done, you will likely want to sync those changes to your backup copies.

The simple way of doing this is just overwriting the backup folders with the updated/processed copy. However, this takes more time since you’re copying all the images all over again, and also introduces some tiny (but non-zero) risks, like possible corruption during copying or simply doing the sync in the wrong direction (e.g. overwriting the processed copy of the photos with the original unprocessed folder).

A file sync utility helps mitigate both of these issues.

Before we continue, a quick explainer: raw files (with the exception of DNG files) will not change when culled or processed. They just get XMP sidecar files containing your ratings and edits. To back up your edits, it’s enough to copy just those XMPs along with any other new files* and merge them with the original folder of raw images in your backup destinations.

(* New files like PSDs and TIFFs created during Photoshop edits, DNGs created after merging panoramas/HDRs or DNGs resulting from advanced denoising processes. Also, any rated/processed DNGs will need to be copied again as they contain the edits inside of them.)

File sync utilities will only sync differences between folders.

This is very quick since most of these new files are tiny. It’s also a good idea not to copy the raw files again unless absolutely necessary, not only because it’s slow but also to avoid possible data corruption during transfer.

Some of these utilities will produce a file list of differences before the sync, which is great for a quick check of what will get copied, updated or deleted.

For example, if you make a mistake and choose the wrong sync direction between a raw folder and a processed folder, such a utility will indicate that you are about to delete a few hundred (or thousand) XMP files, as this would be necessary to make the folders identical.

Even a quick glance will make you realize this isn’t what you want! You’re expecting to copy new XMP files from your processed folder to the raw files folder, so you’d quickly catch this mistake and change the sync direction before any damage is done.
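As a rough illustration of the principle, here’s a toy dry run of a one-way mirror in Python – just the change list, the way a sync utility would preview it. This is not how FreeFileSync itself is implemented, and the folders are placeholders:

```python
from pathlib import Path

def mirror_preview(source: Path, mirror: Path) -> None:
    """Dry run of a one-way mirror: list what would be copied, updated or deleted."""
    src_files = {p.relative_to(source) for p in source.rglob("*") if p.is_file()}
    dst_files = {p.relative_to(mirror) for p in mirror.rglob("*") if p.is_file()}

    for rel in sorted(src_files - dst_files):
        print("COPY  ", rel)   # new files (e.g. fresh XMPs, PSDs, exported DNGs)
    for rel in sorted(src_files & dst_files):
        s, d = source / rel, mirror / rel
        # A newer timestamp or different size on the source side means an update.
        if s.stat().st_mtime > d.stat().st_mtime or s.stat().st_size != d.stat().st_size:
            print("UPDATE", rel)
    for rel in sorted(dst_files - src_files):
        print("DELETE", rel)   # many DELETEs of XMPs = you picked the wrong direction!

# Hypothetical folders, for illustration:
mirror_preview(Path("/work/wedding_processed"), Path("/backup/wedding"))
```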

Manually updated log-book

The log-book is our own original contribution to this whole process. Let’s start with the ‘why’, and explain the ‘how’ in the second part.

As Marina and I work together, we have a few computers and many active hard drives in the mix at all times. We both do various things with various shoots: e.g. Marina might cull something and pass it on to me for further processing.

We juggle multiple hard drives and absolutely must keep track of what data is where, who has done what with it and at what point in time, along with where the backups are and whether they’re up to date or not.

It’s very easy to get both overconfident and forgetful, especially in the heat of the wedding season, when terabytes can flow in and out of our studio each month.

Our solution is relatively simple: a shared log-book (in Evernote) detailing the wedding, the date, the location (on our drives) and what was done with it.

It’s easy to get into the practice of writing into it, updating it and consulting it any time you copy new data or edit something. It may seem like overkill, but it has helped us clear things up (and avert catastrophes) many times in the past.

At this point we’ve been using it close to a decade and don’t intend to stop!

The idea is that we always know which drive is active for a certain wedding (while it’s getting processed), which weddings were backed up, which drive contains the original data and where the copies are, what remains to be backed up (especially useful for offline drives where you can’t check manually without powering them on), what drives have partially deleted weddings (we delete some photos after delivery) etc.

Before we show a demo of the table we use, let’s talk about its origin story for a moment, as some details will make much more sense.

The thing that triggered it all was a wedding corrupted in the transfer process. It created confusion because we had 3 backups of that wedding, without knowing which one was copied from where and whether one of the copies was potentially unaffected by the corruption.

We had a hard time establishing where and why the corruption occurred, so we didn’t know how to prevent it for the next wedding. It was also difficult to reconstruct the path the original data took from being okay to getting corrupted, so checking all the photos in all the backups to see what’s what was very slow.

So, from the very start, the idea of data origin was built into the table to help us retrace our steps in case of need.

Let’s now look at an example of this table we use:

[Image: an example of our manual log-book table]

The first column indicates the date a change occurred. It can be the date the photos were first copied from the memory cards, or the date we started culling or editing.

Once the photos have been fully backed up to all systems we deem necessary, the field gets a green background. It’s a visual indicator of “you’re done with this row”.

A wedding gets one row when first downloaded, a second when it gets culled, often a third for editing, etc. If it gets fully backed up and then changed afterwards, it gets another row for the new change.

If we spend a long time working on a wedding, it will have more rows, as its path will be more complex and it will get multiple backups. For quick one-day edits, one row will often suffice.

The second column is the name of the couple, the key differentiator between shoots.

The third column describes what was done with the wedding, e.g. copied off the cards, culled, edited, created and sorted into a LR collection, exported for delivery, 0-star images deleted, etc.

The fourth column names the original destination of the photos, i.e. the drive or computer we used to perform that action. If I copied the images off the cards directly to my working SSD, I note so in this column. This is a rarely needed piece of information, but it can come in handy if you need to investigate what happened in the past.

The next set of columns is a list of our most used hard drives and computers. The one where the work was originally performed gets an ‘O’ (for origin), and as the data or changes are replicated to other drives or computers, those get an ‘X’ in their columns.

As mentioned, once a wedding is fully backed up, the first column gets a green shade. Combined with all the ‘O’s and ‘X’es in the table, this offers a quick visual indication of the images’ status at a glance.

We can also quickly tell what still needs to be backed up to our offline drives, which are rarely powered on.
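To make the layout concrete, here’s a simplified, made-up illustration of a couple of rows (the names, drives and dates are invented for this example; in the real table, the date cell also gets the green shade once a row is fully backed up):

| Date | Couple | Action | Origin | SSD-1 | RAID | USB-14TB | Laptop |
|---|---|---|---|---|---|---|---|
| 12 Jun | Ana & Ivan | Copied off cards | SSD-1 | O | X | X | |
| 15 Jun | Ana & Ivan | Culled in LR | Laptop | X | X | | O |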

§3: How it all comes together:
A case study of our approach

Okay, so let’s talk specifics.

When coming home from a wedding, we will copy the images to one destination, usually an external working SSD.

Our Evernote table gets an entry for this wedding while the copy is still in progress.

Unless we have another wedding in the next few days, we will leave the images on the cards for the time being, which removes the need to back them up to multiple destinations immediately.

We prefer to use this time to give the images a quick look and maybe start working on a preview.

Keeping the images on cards is also useful for not having to power on our offline drives immediately – it’s a good idea to connect them to your computer as rarely as possible. As long as the master images are on both cards, we’re comfortable having only one working copy to speed things up.

As time progresses, usually sooner rather than later, we will create another two copies of the data and update our log accordingly. The next destinations will usually be our RAID array, which we mostly keep offline, along with a simple large offline USB hard drive.

The RAID array protects us from a single failed hard drive. Over the years, I think we’ve had all five of its drives fail at some point, so it’s not just a theoretical problem.

It features more than 40TB of storage, which means it can fit everything for any length of time. It represents the master copy of all our data – everything is on it.

The additional external USB hard drive (a 14TB unit at the moment) protects us from RAID array corruption and gives us one copy of all the recent data that’s easy to carry with us (e.g. for a longer trip) and relatively quick to access (a hard disk needs just seconds to spin up, compared to minutes for the RAID array).

Once this disk is full, it’s moved to our offline archive and replaced with a fresh, empty drive. (All these disks taken together are an exact mirror of everything on the RAID array.)

Since both these devices are offline most of the time, they protect us from ransomware to some degree as well. To aid with this, we try not to connect them to the computer at the same time, since it’s theoretically possible for ransomware to be active and encrypting data while still silent and unnoticed, thereby spreading to any connected drive as well.

Since they’re mostly offline, they should have extended mechanical life-spans as well.

After initial copying of the data to multiple drives, we only use sync utilities for updating the copies between drives, for reasons described before. Our choice is the amazing free utility FreeFileSync.

To complete the whole backup system, we use two cloud-based services.

Dropbox, with its limited storage space, is useful for syncing and safekeeping a limited amount of data.

We use it to store delivered JPEG files and edited videos. We store them as ‘online-only’ copies and pay for extended version history. This way our local drives aren’t getting filled up with data we will rarely (if ever) need.

If everything blows up in our studio, we will still have easy access to anything we ever created and delivered, from anywhere in the world (albeit in a non-editable form). If we get hit with ransomware, extended version history will have our original, unencrypted data preserved.

The second service we use is Backblaze. For a very modest monthly fee, it will automatically back up an unlimited amount of your data (but not your software or the OS!), both from your computer and from any external drives you choose.

We can’t recommend it enough and have only love for it!

We use it to back up the data on our main computer along with the complete RAID array, where we keep everything in one place. We also pay for the extended version history and use a strong private encryption key, which makes it impossible for anyone to look inside our data on the servers should they ever get hacked.

If anything should ever happen with any of our local copies, we have everything safely and privately stored in the cloud. The best part is that if we ever actually need those multiple terabytes of data, we can ask Backblaze to send us the data on large hard drives by mail, instead of spending days (or weeks) downloading it manually.

Finally, we didn’t mention the backup of the computer itself – operating system and software.

Generally, everything can be reinstalled from scratch if necessary, so it’s no big loss… but that takes time!

In the peak of wedding season, we don’t want to find ourselves reinstalling macOS and all the software, spending inordinate amounts of time reconfiguring and restoring settings and presets. (It’s enough that one macOS update goes wrong and… boom!)

This is why we have a final layer of backup using Carbon Copy Cloner, set up to back up just system files and software (it actively ignores wedding photos and videos, since we back those up manually and much more frequently).

It’s quick since it does just differential backups and we use it each time macOS has an update or Adobe publishes new major versions of its software.

If anything goes wrong during an update, be it macOS itself or just an important plugin in Adobe software that suddenly stopped working, we have a way of restoring the whole system to a previous working state. I don’t think we ever needed that, but for the price of an additional 2TB hard drive and Carbon Copy Cloner license, we love the added peace of mind.

And that’s it! Simple as that. 😂


We hope you found our recovery & backup guide for wedding photographers and videographers useful and that we’ve inspired you to up your backup game and have more peace of mind in the future.

Thanks for reading!