FreeFileSync often doesn't detect when a folder has been renamed or moved. Instead of renaming it accordingly at the other synced location, it suggests deleting it and copying it again (which takes much more time).
Please try to detect this automatically.
Or at least allow the user to tell FreeFileSync that the to-be-deleted and to-be-copied versions of the folder are the same, so that renaming is enough; for example, by selecting the two versions -> right click -> "Detect overlaps" or similar.
Thanks for the great app! :)
Detect renamed/moved directories
- Posts: 14
- Joined: 11 Nov 2019
Or allow right click -> "Rescan" on specific folders, so that I can rename them to have the same name on both sides (and FreeFileSync takes that into account), without rescanning everything (which takes a long time).
The main issue is that I want to sync a few files that are inside big renamed folders. So just skipping both folders will miss those files.
- Posts: 4056
- Joined: 11 Jun 2019
https://freefilesync.org/manual.php?topic=synchronization-settings
Detection is not supported on file systems that don't have (stable) file IDs. Most notably, certain file moves on FAT file systems cannot be detected. Also file accesses via SFTP do not support move detection. In these cases FreeFileSync will automatically fall back to "copy and delete".
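For illustration, here is a minimal sketch of what ID-based move detection can look like, assuming a POSIX system where stat() exposes a stable (device, inode) pair; this is not FFS's actual code, and the paths and the hypothetical database entry are made up:

```cpp
#include <sys/stat.h>
#include <cstdio>
#include <map>
#include <string>
#include <utility>

using FileId = std::pair<dev_t, ino_t>; // stable only on some file systems

bool getFileId(const std::string& path, FileId& id)
{
    struct stat s{};
    if (::stat(path.c_str(), &s) != 0)
        return false; // e.g. a path accessed via SFTP may not expose IDs
    id = {s.st_dev, s.st_ino};
    return true;
}

int main()
{
    // Old paths keyed by file ID, as recorded at the last sync (hypothetical):
    std::map<FileId, std::string> lastSync;
    lastSync[{1, 12345}] = "old-name.txt";

    FileId id;
    if (getFileId("renamed.txt", id))
    {
        auto it = lastSync.find(id);
        if (it != lastSync.end())
            std::printf("moved/renamed from %s\n", it->second.c_str());
        else
            std::printf("new file (or no stable IDs => copy + delete)\n");
    }
}
```

On FAT or SFTP, the lookup key itself is unreliable, which is why the fallback to "copy and delete" is unavoidable there.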
- Posts: 14
- Joined: 11 Nov 2019
Why not consider files with identical size and timestamp as identical? That could be an option for the user to choose.
- Posts: 4056
- Joined: 11 Jun 2019
Because files in different locations with identical size and timestamps are not always identical?
Increases data loss risk basically
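As a concrete illustration of that risk, here is a tiny C++17 sketch (file names made up) that creates two files with identical size and timestamp but different content; a size+timestamp heuristic would wrongly pair them:

```cpp
#include <filesystem>
#include <fstream>

int main()
{
    std::ofstream("a.txt") << "AAAA"; // 4 bytes
    std::ofstream("b.txt") << "BBBB"; // 4 bytes, different content

    namespace fs = std::filesystem;
    // Copy a.txt's timestamp onto b.txt: size and timestamp now both match.
    fs::last_write_time("b.txt", fs::last_write_time("a.txt"));
    // Any rule equating "same size + same timestamp" with "identical"
    // would now treat a.txt and b.txt as the same file.
}
```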
- Posts: 1037
- Joined: 8 May 2006
I've renamed/moved a ton of files on SOURCE, which then no longer coincide with BACKUP.
And I was thinking... that if..., oh I'm not quite sure what...
Well if the intent was to ensure that everything on SOURCE was backed up to BACKUP & named the same as on SOURCE...
If you first performed a size (or size/date) compare, then, on those matches only, checked which files have the same size but NOT the same path/name, and on those matches only did a content compare, that would point out files that could be renamed/moved on BACKUP without copying them over again from SOURCE.
The files exist in both locations, just the name &/or name & path differ.
Finding & renaming/moving rather than copying again would be a neat feature, IMO.
(And of course, I can foresee gotchas too, like with dups on the SOURCE end.)
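A rough sketch of that staged matching, assuming C++17 and made-up /source and /backup paths (not FFS code, just the idea): index BACKUP by size, then content-compare only same-size files whose relative path differs.

```cpp
#include <algorithm>
#include <cstdint>
#include <filesystem>
#include <fstream>
#include <iostream>
#include <iterator>
#include <unordered_map>

namespace fs = std::filesystem;

// Byte-by-byte comparison, done only for the few size-matched candidates.
static bool sameContent(const fs::path& a, const fs::path& b)
{
    std::ifstream fa(a, std::ios::binary), fb(b, std::ios::binary);
    return std::equal(std::istreambuf_iterator<char>(fa), std::istreambuf_iterator<char>(),
                      std::istreambuf_iterator<char>(fb), std::istreambuf_iterator<char>());
}

int main()
{
    const fs::path source = "/source", backup = "/backup"; // assumed paths

    // Stage 1: index BACKUP files by size.
    std::unordered_multimap<std::uintmax_t, fs::path> backupBySize;
    for (const auto& e : fs::recursive_directory_iterator(backup))
        if (e.is_regular_file())
            backupBySize.emplace(e.file_size(), e.path());

    // Stages 2+3: for each SOURCE file, content-compare only same-size BACKUP
    // files at a *different* relative path; a match is a rename/move candidate.
    for (const auto& e : fs::recursive_directory_iterator(source))
    {
        if (!e.is_regular_file())
            continue;
        const fs::path rel = fs::relative(e.path(), source);
        auto range = backupBySize.equal_range(e.file_size());
        for (auto it = range.first; it != range.second; ++it)
            if (fs::relative(it->second, backup) != rel &&
                sameContent(e.path(), it->second))
                std::cout << it->second << " could be moved to " << backup / rel << '\n';
    }
}
```

The duplicates-on-SOURCE gotcha shows up here as multiple candidates for the same BACKUP file; a real implementation would have to pick one or skip ambiguous cases.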
- Posts: 14
- Joined: 11 Nov 2019
xCSxXenon, 12 Dec 2021, 18:02 wrote:
Because files in different locations with identical size and timestamps are not always identical?
Increases data loss risk basically

Let the users decide whether they want to consider files with identical size and timestamp as identical. In many situations, users know that those files are identical, or not critical. Deleting and copying again is quite suboptimal in those cases.
- Posts: 4056
- Joined: 11 Jun 2019
root, 18 Dec 2021, 13:16 wrote:
Let the users decide whether they want to consider files with identical size and timestamp as identical. In many situations, users know that those files are identical, or not critical. Deleting and copying again is quite suboptimal in those cases.

You overestimate the competency of users.
- Posts: 14
- Joined: 11 Nov 2019
xCSxXenon, 19 Dec 2021, 17:48 wrote:
You overestimate the competency of users.

You underestimate the competency of many users.
As usual in such situations, the software can say "if unsure, choose X". On the other hand, those who do know what they are doing should have the option.
- Posts: 3
- Joined: 27 Nov 2023
xCSxXenon, 12 Dec 2021, 18:02 wrote:
Because files in different locations with identical size and timestamps are not always identical?
Increases data loss risk basically

I agree about the possible data loss in other modes... But is there a risk of data loss in comparison mode "File content" + size + timestamp? I guess not =)
Please add this mode to allow rename/move detection before creating "sync.ffs_db"; I don't mind waiting when FFS uses this mode. This feature would also allow safely detecting changes on file systems without stable file IDs.
Or please describe exactly what the probability of "data loss" is when matching on a hash of the content + size + timestamp. I think it is around 0%. This is a really important thing for me before starting the first sync of two different 4 TB HDDs: the first time, I can't sync them 100% manually, and I don't want to do the first sync without detection either (because it is difficult to understand where truly unique files appeared).
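For a ballpark answer, assuming an ideal 256-bit hash such as SHA-256 (my assumption, not an FFS choice), the birthday bound caps the chance of any two distinct files colliding, and even for millions of files it is astronomically small:

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    // Birthday bound: the probability of at least one collision among n files
    // is at most about n^2 / 2^257 for an ideal 256-bit hash (assumption).
    const double n = 1e7;                        // e.g. 10 million files
    const double log2p = 2 * std::log2(n) - 257; // log2 of the upper bound
    std::printf("collision probability < 2^(%.0f)\n", log2p); // about 2^(-210)
}
```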
- Posts: 2450
- Joined: 22 Aug 2012
Using hashes has been suggested multiple times in this forum, and it has also been explained why using hashes does not make sense for FFS.
viewtopic.php?t=6709#p22256
viewtopic.php?t=6296&p=20695#p20695
- Posts: 3
- Joined: 27 Nov 2023
I read this:
Plerry, 28 Nov 2023, 08:04 wrote:
Using hashes would require software to run locally at each of the locations involved in the sync, which is often not possible.
FFS only needs to run on any single machine that can access each of the locations involved in the sync.

If FFS can already do a bit-by-bit comparison with a remote directory, does this mean it is still possible to calculate the hashes (both local and remote) using only a single FFS program on my PC?
- Posts: 2450
- Joined: 22 Aug 2012
Nope.
As said, FFS does not use hashes, as it makes no sense for FFS.
So, why would FFS then still waste time/effort in calculating hashes?
- Posts: 3
- Joined: 27 Nov 2023
I know that FFS doesn't use hashes - that's why FFS can't detect changes before the first sync, without sync.ffs_db. But maybe implement hashes for just one feature - the "first scan" - and use only RAM to store the hashes during the first sync; after that, just create sync.ffs_db in the usual way...
I found the FFS source code, so if the FFS devs can't do it, I guess I can add some libraries, implement this feature myself, and build FFS with hash checking for the first sync, without changing the logic of the original methods.
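To make the idea concrete, here is a hedged sketch of such a RAM-only first scan, assuming made-up /source and /backup paths; std::hash over the whole file content stands in for a real cryptographic hash (e.g. SHA-256 from an added library), and real code would stream files instead of loading them whole:

```cpp
#include <filesystem>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <unordered_map>

namespace fs = std::filesystem;

// Placeholder content hash (assumption: a real build would use SHA-256).
static size_t hashFileContent(const fs::path& p)
{
    std::ifstream in(p, std::ios::binary);
    std::ostringstream buf;
    buf << in.rdbuf(); // loads the whole file; sketch only
    return std::hash<std::string>{}(buf.str());
}

int main()
{
    // Pass 1: hash every BACKUP file, keeping the table only in RAM.
    std::unordered_multimap<size_t, fs::path> backupByHash;
    for (const auto& e : fs::recursive_directory_iterator("/backup"))
        if (e.is_regular_file())
            backupByHash.emplace(hashFileContent(e.path()), e.path());

    // Pass 2: for each SOURCE file, report rename/move candidates on BACKUP.
    for (const auto& e : fs::recursive_directory_iterator("/source"))
        if (e.is_regular_file())
        {
            auto [lo, hi] = backupByHash.equal_range(hashFileContent(e.path()));
            for (auto it = lo; it != hi; ++it)
                std::cout << e.path() << " content-matches " << it->second << '\n';
        }
}
```

After this one-time matching, the usual sync.ffs_db could be written and the in-memory hash table discarded, so nothing about the normal comparison logic would change.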