Hi,
I've seen the posts that say that the file comparison is done on a bit by bit
level, but I was wondering whether the synchronisation is also done bit by
bit?
So if I have, say, a 40 GB Word document and change only one character in it,
will the program copy the whole file over, or will it only copy that one small
change (like "true" rsync)?
Many thanks for your help
Is FreeFileSync true rsync synchronisation?
- Posts: 1
- Joined: 28 Sep 2010
- Site Admin
- Posts: 7279
- Joined: 9 Dec 2007
> will the program copy the whole file over, or will it only copy that one
small change (like "true" rsync)?
FFS doesn't have a client/server model that rsync requires for a "delta copy"
routine, so it will copy the complete file.
BTW a 40 GB Word doc should be a rare use case ;)
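For context, the delta-copy approach mentioned above works roughly like this: the receiving side hashes fixed-size blocks of its copy, and the sender only transmits the blocks whose hashes differ. The sketch below (plain Python, not FFS or rsync code) illustrates the fixed-block variant; real rsync additionally uses a rolling checksum so it can match blocks even after data is inserted or deleted mid-file:

```python
import hashlib

BLOCK_SIZE = 1024  # illustrative; rsync chooses its block size dynamically

def block_hashes(data: bytes) -> list:
    """Hash each fixed-size block of a file's contents."""
    return [hashlib.md5(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(old: bytes, new: bytes) -> list:
    """Return the indices of blocks that differ between two versions."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or h != old_h[i]]

old = b"a" * 4096
new = b"a" * 2048 + b"b" * 1024 + b"a" * 1024
print(changed_blocks(old, new))  # prints [2]: only the third block changed
```

In a real client/server setup only the changed blocks (here, 1 KB instead of 4 KB) would cross the wire, which is exactly why the one-character change in a huge file is cheap for rsync but not for a plain file copy.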
- Posts: 3
- Joined: 10 Aug 2011
Any plans to implement a "client/server" switch that would allow rsync-style
delta transfers, which are efficient over slow links? I regularly need to sync
PST files of 30 GB, and even on a 100 Mb link that takes a lot of time when
only 100 KB or so has changed.
As for the rest, the program is near perfect!
- Posts: 3
- Joined: 10 Aug 2011
Suggestion: another alternative to client/server could be (crazy idea, but...)
to allow files above a certain limit (say 100 MB) to be "monitored" by way of
sequential hashes for each 1 MB, stored in a .FFS.db file, in order to find
out which 1 MB segment has been changed; in that case only that segment would
be transferred, instead of the whole file. This would be really amazing!
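The suggestion above can be sketched in a few lines. This is only an illustration of the idea, not FFS code: the database path and JSON storage format are made up for the example, and the per-1 MB hashes are recomputed on every run rather than maintained by a file system monitor:

```python
import hashlib
import json
import os

SEGMENT = 1024 * 1024  # 1 MB segments, as in the suggestion

def segment_hashes(path):
    """Compute a hash for each 1 MB segment of a file."""
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(SEGMENT):
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

def changed_segments(path, db_path):
    """Compare current segment hashes against stored ones and return
    the indices of changed segments; then update the stored hashes."""
    stored = []
    if os.path.exists(db_path):
        with open(db_path) as f:
            stored = json.load(f)
    current = segment_hashes(path)
    changed = [i for i, h in enumerate(current)
               if i >= len(stored) or h != stored[i]]
    with open(db_path, "w") as f:
        json.dump(current, f)
    return changed
```

Note the catch the admin raises below: since the hashes must be recomputed by reading the whole file anyway (unless a file system filter tracks writes in real time), this saves network transfer but not local I/O.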
- Posts: 3
- Joined: 10 Aug 2011
Sorry, one more: an implementation of "restarting from break point" for
syncing over slow and unstable links would also be a must.
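"Restarting from break point" presumably means resuming an interrupted file copy at the offset where it stopped, instead of starting over. A minimal sketch of that idea (not FFS code; it assumes the partially written destination file is a valid prefix of the source, which a real implementation would have to verify, e.g. by hashing):

```python
import os

def resume_copy(src, dst, chunk=64 * 1024):
    """Continue copying src to dst from where a previous
    interrupted copy left off (append mode)."""
    # Assume everything already in dst is a correct prefix of src.
    done = os.path.getsize(dst) if os.path.exists(dst) else 0
    with open(src, "rb") as fin, open(dst, "ab") as fout:
        fin.seek(done)
        while block := fin.read(chunk):
            fout.write(block)
```

Over an unstable link, this would let a 30 GB transfer that dies at 29 GB finish with 1 GB of retransmission instead of 30.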
- Site Admin
- Posts: 7279
- Joined: 9 Dec 2007
> plans to implement rsync
Currently not, but technically it is possible to implement it after
introducing a file system abstraction layer, which FFS already "almost" has.
It could be yet another client just like zip-compression:
https://sourceforge.net/tracker/?func=detail&aid=3041854&group_id=234430&atid=1093083
> Suggestion: another alternative
I don't think this is feasible. Technically you would need to pull enormous
stunts to get at the required information: you'd need a file system filter,
and this only works for local files. But even all this doesn't help, because
after a system restart you have to pessimistically assume you know nothing
about the monitored file's internal state.
> implementation of "restarting from break point"
What do you mean by that?