Detect renamed/moved directories

Posts: 14
Joined: 11 Nov 2019

root

FreeFileSync often doesn't detect when a folder has been renamed or moved. Instead of renaming it accordingly at the other synced location, it suggests deleting it and copying it again (which takes much more time).

Please try to detect it automatically.

Or at least allow the user to tell FreeFileSync that the to-be-deleted and the to-be-copied versions of the folder are the same, and that renaming is enough. For example, by selecting the two versions -> right click -> "Detect overlaps" or similar.

Thanks for the great app! :)
Posts: 14
Joined: 11 Nov 2019

root

Or allow right click -> "Rescan" on specific folders, so that I can rename them to match (and FreeFileSync takes that into account) without rescanning everything (which takes a long time).

The main issue is that I want to sync a few files that are inside big renamed folders. So just skipping both folders will miss those files.
Posts: 3611
Joined: 11 Jun 2019

xCSxXenon

https://freefilesync.org/manual.php?topic=synchronization-settings
Detection is not supported on file systems that don't have (stable) file IDs. Most notably, certain file moves on FAT file systems cannot be detected. Also file accesses via SFTP do not support move detection. In these cases FreeFileSync will automatically fall back to "copy and delete".
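As a rough illustration of the general idea behind file-ID-based move detection (this is a hypothetical sketch, not FreeFileSync's actual code): on file systems with stable IDs, a rename keeps the file's ID, so a path from an old snapshot can be matched to a path in a new one. On POSIX systems the (device, inode) pair plays that role:

```python
# Hypothetical sketch of move detection via stable file IDs.
# On POSIX file systems a rename within the same volume keeps the inode,
# so (st_dev, st_ino) identifies a file across two directory snapshots.
import os


def snapshot_ids(root):
    """Map (device, inode) -> relative path for every file under root."""
    ids = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            ids[(st.st_dev, st.st_ino)] = os.path.relpath(path, root)
    return ids


def detect_moves(old_ids, new_ids):
    """Yield (old_path, new_path) for files whose ID survived a rename/move."""
    for file_id, new_path in new_ids.items():
        old_path = old_ids.get(file_id)
        if old_path is not None and old_path != new_path:
            yield old_path, new_path
```

This also shows why FAT and SFTP break the scheme: without a stable ID to carry across snapshots, the lookup in `detect_moves` has nothing to key on, and "copy and delete" is the only safe fallback.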
Posts: 14
Joined: 11 Nov 2019

root

Why not consider files with identical size and timestamp as identical? That could be an option for the user to choose.
Posts: 3611
Joined: 11 Jun 2019

xCSxXenon

Because files in different locations with identical size and timestamps are not always identical?
Increases data loss risk basically
Posts: 944
Joined: 8 May 2006

therube

I've renamed/moved a ton of files from SOURCE which then no longer coincides with BACKUP.

And I was thinking... that if..., oh I'm not quite sure what...

Well if the intent was to ensure that everything on SOURCE was backed up to BACKUP & named the same as on SOURCE...

If you first performed a size (or size/date) compare, then, on those matches only (same size but NOT the same path/name), did a content compare, that would point out files that could be renamed/moved on BACKUP without copying them over again from SOURCE.

The files exist in both locations, just the name &/or name & path differ.
Finding & renaming/moving rather than copying again would be a neat feature, IMO.


(And of course, I can foresee gotchas too, like with dups on the SOURCE end.)
Posts: 14
Joined: 11 Nov 2019

root

Because files in different locations with identical size and timestamps are not always identical?
Increases data loss risk basically xCSxXenon, 12 Dec 2021, 18:02
Let the users decide whether they want to consider files with identical size and timestamp as identical. In many situations, users know that those files are identical, or not critical. Deleting and copying again is quite suboptimal in those cases.
Posts: 3611
Joined: 11 Jun 2019

xCSxXenon

Let the users decide whether they want to consider files with identical size and timestamp as identical. In many situations, users know that those files are identical, or not critical. Deleting and copying again is quite suboptimal in those cases. root, 18 Dec 2021, 13:16
You overestimate the competency of users.
Posts: 14
Joined: 11 Nov 2019

root

You overestimate the competency of users. xCSxXenon, 19 Dec 2021, 17:48
You underestimate the competency of many users.

As usual in such situations, the software can say "if unsure, choose X". On the other hand, those who do know what they are doing should have the option.
Posts: 3
Joined: 27 Nov 2023

biosnod

Because files in different locations with identical size and timestamps are not always identical?
Increases data loss risk basically xCSxXenon, 12 Dec 2021, 18:02
I agree about the possible loss in other modes... But is there a risk of data loss in the comparison mode "File content" + size + timestamp? I guess no =)
Please add this mode to allow rename/move detection before "sync.ffs_db" is created; I don't mind waiting when FFS uses this mode. This feature would also allow safe change detection on file systems without stable file IDs.

Or please describe exactly what the probability of "data loss" is when comparing a content hash + size + timestamp? I think it is around 0%. This is really important for me for the first sync of two different 4 TB HDDs: the first time I can't sync them 100% manually, and I don't want to do the first sync without detection either (because it is difficult to tell where truly unique files appeared).
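The fingerprint being proposed could look like the following (a hypothetical illustration, not how FreeFileSync works). With SHA-256, the chance of two different files colliding on all three of content hash, size, and timestamp is negligible in practice, which is the basis for the "around 0%" claim:

```python
# Hypothetical (content hash, size, mtime) fingerprint as proposed above.
# SHA-256 collisions are astronomically unlikely for any realistic file set,
# so matching fingerprints can be treated as matching files.
import hashlib
import os


def fingerprint(path, chunk_size=1 << 20):
    """Return (sha256 hex digest, size in bytes, mtime) for a file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large files don't need to fit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    st = os.stat(path)
    return h.hexdigest(), st.st_size, int(st.st_mtime)
```

The cost is that every byte of both volumes must be read once, which is why this only makes sense as an opt-in mode for the situations described here.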
Posts: 2288
Joined: 22 Aug 2012

Plerry

Using hashes has been suggested multiple times in this forum, and it has also been explained why using hashes does not make sense for FFS.
viewtopic.php?t=6709#p22256
viewtopic.php?t=6296&p=20695#p20695
Posts: 3
Joined: 27 Nov 2023

biosnod

I read this:
Using hashes would require software to run locally at each of the locations involved in the sync, which is often not possible.
FFS only needs to run on any single machine that can access each of the locations involved in the sync. Plerry, 28 Nov 2023, 08:04
If FFS can already do a bit-by-bit comparison against a remote dir, does this mean it is still possible to calculate the hashes (both local and remote) using only the single FFS program on my PC?
Posts: 2288
Joined: 22 Aug 2012

Plerry

Nope.
As said, FFS does not use hashes, as it makes no sense for FFS.
So, why would FFS then still waste time/effort in calculating hashes?
Posts: 3
Joined: 27 Nov 2023

biosnod

I know that FFS doesn't use hashes; that's why FFS can't detect changes before the first sync, without sync.ffs_db. But maybe hashes could be implemented for just one feature, the "first scan", using only RAM to store the hashes during the first sync; after that, sync.ffs_db is created in the usual way...

I found the FFS source code. In that case, if the FFS devs can't do it, I guess I can add some libraries, implement this feature myself, and build FFS with hash checking for the first sync, without changing the logic of the original methods.
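A minimal sketch of that "first scan" idea (hypothetical, not part of FreeFileSync): hold content hashes in an in-memory dict only for the initial scan, match renames across the two sides, and then let the normal database-based sync take over. The tables are ordinary Python dicts, so they vanish when the scan finishes; nothing is written to disk:

```python
# Hypothetical first-scan-only rename matching via in-RAM hash tables.
# After this one pass, the hashes are discarded and normal sync resumes.
import hashlib
import os


def hash_tree(root):
    """Map sha256(content) -> relative path; lives only in RAM."""
    table = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            table[digest] = os.path.relpath(path, root)
    return table


def first_scan_moves(left_root, right_root):
    """Pairs (right_relpath, left_relpath) with equal content but different paths."""
    left = hash_tree(left_root)
    right = hash_tree(right_root)
    return [
        (right[d], left[d])
        for d in left.keys() & right.keys()
        if left[d] != right[d]
    ]
```

Note this simple version keeps one path per digest, so duplicate files on either side would need extra handling, the same gotcha raised earlier in the thread.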