Suggestions on amended Backup Strategy after Disk Failure

I have a 24x7x365 "Home Server" system which acts as a file server and Plex media server. I am using 6 x 2TB HDDs with DriveBender as a storage aggregator to give a single 8TB storage pool. DriveBender is a cross between software RAID and Windows' native Storage Spaces.
Long story short, due to an email change I missed the alerts about a failing hard drive until it was too late, and the drive died before it could be safely replaced. However, all data such as Documents, Photos, Videos etc. is backed up to OneDrive (my Plex library of Movies, TV Shows, Music etc. is backed up to another Plex server). On attempting to restore my missing data (which was only partial, as the data is spread across all the internal disks) I have seen a flaw in my backup strategy.
Currently I am using SyncToy to back up nightly any new or changed files to a directory on a separate 2TB hard drive. This directory is the OneDrive root folder, so the contents are backed up to OneDrive.
Previously this appeared to work well. I am using the Echo option, i.e.
- New and updated (renamed and deleted) files are copied LEFT to RIGHT only
- Use this option if you want to back up data files one-way, from your computer to an external drive
This means that any old files I no longer need and delete are also deleted from the backup, which prevents the backup from steadily growing with old unwanted files. However, failure of the hard drive is equivalent to a deletion as far as SyncToy is concerned, so it has deleted the lost files from the backup!
My best option is to change from Echo to Contribute, i.e.
- New and updated (renamed) files are copied LEFT to RIGHT only. Deleted files are not mirrored.
- Use this option if you want to back up data files one-way, with the exception of deleted files.

This would avoid the possibility of an accidental deletion also deleting files from the backup. The downside is that the space taken by the backup will continue to grow: I am nearly up to 940GB of my 1TB OneDrive allocation due to the number of home videos and family photos we have.
I have considered using another sync session that runs, say, once every month or even every three months using Echo, which would detect any "deliberate" deletions of files and remove them from the backup drive. I don't think this is achievable with SyncToy, as all folder pairs are run at the same time, so I am considering switching to FreeFileSync, as I believe you can use batch scripts to run different backups at different times.
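Something like this is what I have in mind (a rough sketch, assuming FreeFileSync is installed in its default location and that two jobs, Nightly_Update.ffs_batch and Quarterly_Mirror.ffs_batch, have been saved from the GUI; the job names and paths are just my placeholders, not anything FreeFileSync prescribes), scheduling the two jobs independently from Windows Task Scheduler:

    :: Nightly one-way Update backup at 02:00
    schtasks /Create /TN "FFS Nightly Update" /SC DAILY /ST 02:00 ^
        /TR "\"C:\Program Files\FreeFileSync\FreeFileSync.exe\" \"D:\FFSJobs\Nightly_Update.ffs_batch\""

    :: Quarterly Mirror run to purge deliberate deletions (1st of Jan/Apr/Jul/Oct at 03:00)
    schtasks /Create /TN "FFS Quarterly Mirror" /SC MONTHLY /M JAN,APR,JUL,OCT /D 1 /ST 03:00 ^
        /TR "\"C:\Program Files\FreeFileSync\FreeFileSync.exe\" \"D:\FFSJobs\Quarterly_Mirror.ffs_batch\""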
Any suggestions on a backup regime change using FreeFileSync to fulfil what I am trying to achieve would be appreciated. I actually used it to restore the missing data and it seems very comprehensive, but I am new to the software, including RealTimeSync.
- Posts: 5
- Joined: 23 Jan 2022
- Posts: 4056
- Joined: 11 Jun 2019
Source corruption is always a concern, and unfortunately FFS may have done the same thing: if it sees the source and finds nothing in it, it will mirror that change if told to do so. This is why air-gapped and offsite backups are also critically important. There should be a backup location that is only written to by manual means, whether you have to plug it in to back up or you only run a backup manually. If all your backups are automated and you aren't using versioning, you basically don't have a backup. Luckily, OneDrive does have a Recycle Bin, so look there.
FFS is great, but I don't think it would have changed the outcome for you, as there is a fault in your backup strategy, not the tools you use.
- Posts: 5
- Joined: 23 Jan 2022
Many thanks xCSxXenon for your response. You are of course absolutely right about more than one automated backup. If I think back to my previous employment, we backed up local file servers over the network using Robocopy scripts to a central location and then backed up again to a tape vault, with the nightly tape being rotated and placed in the fire safe! What I wondered, though, is whether, if I changed from Echo (Mirror in FFS terminology) to Contribute (Update in FFS), there would be no deletions of backup files where the source file is deleted. I could then regularly (and, after your thoughts, manually) run a separate Mirror job which would highlight any deliberate deletions in the source and allow me to apply or not apply those deletions to the backup data. This way I could keep the backup below the 1TB limit. I am currently looking at software (MultCloud among others) that might allow backup to more than one OneDrive account, so the backups are incremental on one OneDrive and full on the other. It's a shame FreeFileSync doesn't currently back up to OneDrive, only Amazon.
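The manual Mirror run could be as simple as a batch file I keep on the desktop and only ever run by hand (assuming FreeFileSync's default install path and a Monthly_Mirror.ffs_gui job saved from the GUI; the names are just my placeholders). Passing a .ffs_gui file opens FreeFileSync interactively with that job loaded, so I can review the proposed deletions before pressing Synchronize:

    :: review_mirror.cmd - run manually, never scheduled
    "C:\Program Files\FreeFileSync\FreeFileSync.exe" "D:\FFSJobs\Monthly_Mirror.ffs_gui"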
Also, I like the functionality of FFS which identifies if a file has been moved from one directory to another. Currently we download all photos from our iPhones and tablets to a directory on the server using the PhotoSync app. We then sort, rename and file those we want to keep under the My Photos directory, by which time the original file has already been backed up. I believe FFS uses a file ID (at the file system level) to identify that the same file has been moved to a new directory. This would be useful in the scenario above and would reduce duplication.
- Posts: 4056
- Joined: 11 Jun 2019
That's a good idea! Automatic Update syncs and then periodic manual Mirror syncs still keeps the size in check.