More granularity on Mirror

Posts: 3
Joined: 20 May 2015

david-sc

I just set up FFS to replace SyncToy, and so far so good. I may be missing something, but I am having trouble getting something to happen automatically rather than manually. Here is my workflow:

I am archiving a large batch of slides and negatives (60,000) in high resolution. I scan locally, my local hard drive is backed up separately to a fixed drive, and I have an 8TB NAS on the network where I use FFS to copy the files over. So far so good: I have a batch job set up, and RealtimeSync is catching every new scan and moving it over.

My issue is that I often go in and change filenames after I make corrections or discover new information about an image, and at that point FFS creates a new file on the NAS. I would like it to rename the existing one instead. I can accomplish this with Mirror, but my issue with Mirror comes down the road: I will start to run out of room on the PC I am using (its 1TB drive is working space) and will need to delete folders there. At that point I do not want them deleted on the NAS, but Mirror will delete them. Is there any way to have FFS just update the filenames but ignore wholesale deletes?

Thanks
Posts: 2288
Joined: 22 Aug 2012

Plerry

Instead of using "Mirror", you can use the "Update" synchronization variant.
Assuming an A => B Update sync,
* Any files existing on A and not existing on B will be copied from A to B.
* If files exist on both A and B, and have the same date/time and size, nothing happens.
* If files exist on both A and B, and the one on A is newer, the file will be copied from A to B.
* If files exist on both A and B, and the one on B is newer, this poses a conflict.
* Files existing on B and not (or no longer) existing on A will remain on B.

The only problem might be when you rename files on A.
This might ultimately result in files being stored on B under both the old and the new name.

Be aware that for files existing on B but no longer on A you would need to provide an adequate backup facility.
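
To make those rules concrete, here is a minimal Python sketch of the same decision logic. It is only an illustration of the Update variant as described above, not FFS's actual implementation; the two-second timestamp tolerance and the recursive walk are assumptions.

    import os
    import shutil

    def update_sync(src_root, dst_root, tolerance=2.0):
        """Illustrative A => B "Update" pass: copy new or newer files from A to B,
        report conflicts where B is newer, and never delete anything on B."""
        for dirpath, _dirnames, filenames in os.walk(src_root):
            rel = os.path.relpath(dirpath, src_root)
            dst_dir = os.path.join(dst_root, rel)
            os.makedirs(dst_dir, exist_ok=True)
            for name in filenames:
                src = os.path.join(dirpath, name)
                dst = os.path.join(dst_dir, name)
                if not os.path.exists(dst):
                    shutil.copy2(src, dst)      # exists only on A -> copy to B
                    continue
                s, d = os.stat(src), os.stat(dst)
                if s.st_size == d.st_size and abs(s.st_mtime - d.st_mtime) <= tolerance:
                    continue                    # same date/time and size -> nothing happens
                if s.st_mtime > d.st_mtime:
                    shutil.copy2(src, dst)      # A is newer -> copy A over B
                else:
                    print("Conflict (B is newer):", dst)
        # Files that exist only under dst_root are never touched, i.e. no deletions.

Note that, just like the real Update variant, this does nothing clever about renames: a file renamed on A simply shows up on B a second time under the new name.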
Posts: 3
Joined: 20 May 2015

david-sc

Thanks for the explanation, Plerry. This is where I have settled. Maybe I need to switch the comparison from file time and size (which leads to a new file being created on B when I rename a file on A) to file content?

I am starting to think that I need to scan to a folder that is not being watched, get everything cleaned up, and then move it into the watched folder to be transferred. I may also need to remove the sync on folders where I have completed scanning so they are not checked each time I make a change. Right now I have a folder that is being watched and I keep adding sub-folders and scanning into them. This is nice because it automatically makes copies of everything, but as the number of sub-folders grows I think this will slow down.

So far I am finding that if I either delete the copy (tedious), or change the name on the NAS and then make the identical change on my working hard drive, I end up with only one file; but that is also tedious and prone to error.
Posts: 2288
Joined: 22 Aug 2012

Plerry

Comparison based on file content rather than on file time/size will not help.
FFS compares by file time/size or by file content for identically named files;
it will not match renamed or moved files unless FFS recognizes them as such.
I don't know how FFS tries to recognize renamed or moved files,
but in my experience it fails to do so most of the time.

Instead of using RealtimeSync (RTS) to launch an FFS sync upon a change
in the monitored folder(s), you might also consider launching FFS manually.
You could then do so only once all your changes are complete.
This might avoid the need to make your modifications in a folder RTS does not monitor
and then move them into an RTS-monitored folder.
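
If you go that route, a saved batch job can also be started on demand by passing the .ffs_batch file to FreeFileSync on the command line; the install path and job name below are only examples:

    "C:\Program Files\FreeFileSync\FreeFileSync.exe" "D:\SyncJobs\SlideArchive.ffs_batch"

That lets you trigger the sync from a desktop shortcut or a script once all renaming is finished.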
Posts: 22
Joined: 18 Dec 2009

grobbla

Another idea:
You could change the way you rename the files,
i.e. write a little script that renames the local file and, at the same time, the remote one.
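
For example, something along these lines in Python (the folder roots and filenames are placeholders; renaming the NAS copy first means that when the local rename triggers a sync, the new name already exists on the NAS and nothing is re-transferred):

    import os

    # Placeholder roots: adjust to your working drive and NAS share.
    LOCAL_ROOT = r"D:\Scans"
    NAS_ROOT = r"\\NAS\Archive\Scans"

    def rename_both(rel_old, rel_new):
        """Rename a file under both roots, NAS copy first, so the sync that the
        local rename triggers finds an identical file already on the NAS."""
        for root in (NAS_ROOT, LOCAL_ROOT):
            old = os.path.join(root, rel_old)
            new = os.path.join(root, rel_new)
            os.rename(old, new)
            print("renamed", old, "->", new)

    # Example: correct the number in one scanned slide.
    rename_both(r"Batch12\slide_1234.tif", r"Batch12\slide_5678.tif")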
Posts: 3
Joined: 20 May 2015

david-sc

grobbla - I am basically doing that manually, but I rename the remote one first; otherwise renaming the local one kicks off a transfer and creates a new file which then needs to be deleted. Since the files are around 130MB each, it is better if I can avoid transferring them. For example, yesterday I changed 4 digits on 280 files: I used a bulk rename tool to change them on the NAS, then locally. Much better than re-transferring all that data.
Posts: 22
Joined: 18 Dec 2009

grobbla

Yes, sure, transferring the files again is not good.
If a bulk rename tool could be a temporary solution for you (in case Mirror switches to using database files at some point), I would recommend "Rename Master".
You can set rules for renaming, then switch to the other path and apply the same rules again. There is also a directory history, so you can switch back and forth easily.

But yes, it would be nice if FFS could handle that.
Posts: 73
Joined: 13 Nov 2003

wm-sf

Plerry wrote:
Instead of using "Mirror", you can use the "Update" synchronization variant.
Assuming an A => B Update sync,
* Any files existing on A and not existing on B will be copied from A to B.
* If files exist on both A and B, and have the same date/time and size, nothing happens.
* If files exist on both A and B, and the one on A is newer, the file will be copied from A to B.
* If files exist on both A and B, and the one on B is newer, this poses a conflict.
* Files existing on B and not (or no longer) existing on A will remain on B.

The only problem might be when you rename files on A.
This might ultimately result in files being stored on B under both the old and the new name.

Be aware that for files existing on B but no longer on A you would need to provide an adequate backup facility.
Isn't this where "detect moved files" comes into play?