Hi!
I've already read some old discussions about content comparison using hashes, and that FFS uses a bit-by-bit compare because it can't run code on each location. But since the bit-by-bit compare already transfers the WHOLE remote file locally to compare it against the local one, why not use that same transfer to also compute md5/sha1 etc.? Copying it into memory or a temp file and then creating the hash only for the remote path would solve this.
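To make the idea concrete, here is a minimal sketch (my own illustration, not FFS code; the function name and chunk size are invented) of streaming a file through a hash function, the same way the bytes could be hashed while they are being transferred:

```python
import hashlib

def file_hash(path, algo="sha1", chunk_size=1024 * 1024):
    """Stream a file through a hash function without loading it all into memory."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        # read in fixed-size chunks until EOF (empty bytes object)
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```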
My next question is about adding another compare mode that could be useful: build a hash database, so that if datetime+filesize are the same, the remote hash is not recreated; instead only the local hash is checked and compared with the one stored in the database.
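Roughly what I have in mind (again just a sketch under my own assumptions; the in-memory dict stands in for a real database):

```python
import hashlib
import os

hash_db = {}  # hypothetical cache: (path, mtime, size) -> previously computed hash

def cached_hash(path):
    """Recompute the hash only when datetime+filesize have changed."""
    st = os.stat(path)
    key = (path, st.st_mtime_ns, st.st_size)
    if key not in hash_db:  # metadata changed, or file never seen before
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                h.update(chunk)
        hash_db[key] = h.hexdigest()
    return hash_db[key]
```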
Just my 2 cents :)
Content compare and mixing compare
- Posts: 2
- Joined: 11 Jun 2023
- Posts: 2451
- Joined: 22 Aug 2012
Same arguments as already provided in the previous discussions on this topic:
In order to determine the hash of a remote file, FFS would need to download the remote file to the machine running FFS. At that point it is better to compare the entire left- and right-side files directly than to first compute the hashes of both files and then compare the hashes.
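A rough sketch of why the direct compare can even come out ahead (my own illustration, not FFS internals): comparing the raw streams can stop at the first differing chunk, whereas hashing must always read both files to the end:

```python
def files_equal(path_a, path_b, chunk_size=1024 * 1024):
    """Compare two files byte-for-byte, stopping at the first difference."""
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        while True:
            a = fa.read(chunk_size)
            b = fb.read(chunk_size)
            if a != b:
                return False  # early exit: no need to read the rest
            if not a:
                return True   # both files ended at the same point
```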
Having FFS store hashes and compare those stored hashes makes no sense either: if the file has changed, the stored hash is no longer the hash of the changed file.
Only if some service were running on the remote machine that could provide FFS with the momentary hash of the remote file would a checksum-based approach potentially be attractive.
But as this is normally not the case, and depends on the remote file server, FFS cannot rely on that.
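Purely to make that hypothetical concrete (everything here is invented for illustration: the /hash endpoint, URL scheme, and plain-text response; no such service ships with FFS or with common file servers), such a service would let the client fetch a checksum without transferring the file:

```python
import urllib.parse
import urllib.request

def remote_hash(host, path):
    """Ask a hypothetical checksum service on the remote machine for a file's hash."""
    url = f"http://{host}/hash?path={urllib.parse.quote(path)}"
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode().strip()
```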
As per your further suggestion:
Only re-creating the hash of the remote file when datetime and/or file size differ defeats the purpose of comparing by content, as it does not guarantee the actual data is (still) the same.
In that case you might just as well simply compare by datetime+filesize and be done much more quickly.
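That metadata-only check amounts to roughly this (again only a sketch, using POSIX-style stat fields):

```python
import os

def same_by_metadata(path_a, path_b):
    """Quick check: identical modification time and file size."""
    sa, sb = os.stat(path_a), os.stat(path_b)
    return sa.st_mtime_ns == sb.st_mtime_ns and sa.st_size == sb.st_size
```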
- Posts: 2
- Joined: 11 Jun 2023
OK, but it isn't always used for remote files. For example, I use it on my LAN between a RAID array and a NAS, and there a bit-by-bit comparison amounts to the same thing as downloading the whole file to compare it with the other one.