Syncing many small files with FreeFileSync takes far too much time, sometimes days or weeks. It would be nice if FreeFileSync supported taking multiple small files at run time, compressing them, transferring the archive to the backup drive, and unzipping it there. I am sure this could save a huge amount of time.
Exactly like how HTTP supports compression to reduce network bandwidth needs: https://developer.mozilla.org/en-US/docs/Web/HTTP/Compression
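The idea above can be sketched with Python's standard library (this is an illustration of the proposal, not an existing FreeFileSync feature; all paths and file names here are made up for the example):

```python
# Sketch: batch many small files into one gzip-compressed archive,
# move that single archive, then extract it on the backup side.
import pathlib
import shutil
import tarfile
import tempfile

src = pathlib.Path(tempfile.mkdtemp())          # stands in for the source folder
dst = pathlib.Path(tempfile.mkdtemp())          # stands in for the backup drive
for i in range(50):                             # create sample small files
    (src / f"file{i}.txt").write_text("hello world\n" * 20)

archive = src.parent / "batch.tar.gz"
with tarfile.open(archive, "w:gz") as tar:      # compress at the source
    tar.add(src, arcname=".")

shutil.copy(archive, dst / "batch.tar.gz")      # one big transfer instead of 50 small ones
with tarfile.open(dst / "batch.tar.gz") as tar: # decompress at the target
    tar.extractall(dst / "restored")
```

The win, if any, would come from replacing many per-file round trips with a single large sequential transfer.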
Smart Accelerate with Runtime Compression
- Posts: 1
- Joined: 25 Apr 2020
- Attachments
- days.png (17.82 KiB)
- Posts: 4056
- Joined: 11 Jun 2019
The compression methods you are referring to are optimized for text, which is why they are so fast to compress and decompress. The overhead that would arise from what you are suggesting would be atrocious. Not to mention, CPU and storage speed have a huge influence on (de)compression as well. Even if there is a point at which it would be faster, that point would be different for every hardware combination out there. How do you determine when to switch to that method when there are no concrete factors to base that decision on?

You could do some tests: use 7-Zip to compress a set of data, transfer the compressed files, then decompress them, and record the total time from the start of compressing to the end of decompressing. Then compare that to the time it takes to transfer the same set of data uncompressed. Do that for many different numbers of files and see where the lines cross.
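The suggested test can be prototyped with the standard library (a rough harness, not a claim about FFS itself; here the "transfer" is a local copy, so substitute your real source and backup paths to measure anything meaningful):

```python
# Compare: plain per-file copying vs. compress -> transfer -> extract.
import pathlib
import shutil
import tarfile
import tempfile
import time

def make_files(n, size=1024):
    """Create n small sample files in a fresh temp directory."""
    d = pathlib.Path(tempfile.mkdtemp())
    for i in range(n):
        (d / f"f{i}.dat").write_bytes(b"x" * size)
    return d

def plain_copy(src):
    """Time copying every file individually."""
    dst = pathlib.Path(tempfile.mkdtemp())
    t0 = time.perf_counter()
    for f in src.iterdir():
        shutil.copy(f, dst / f.name)
    return time.perf_counter() - t0

def compressed_copy(src):
    """Time compressing, moving one archive, and extracting it."""
    dst = pathlib.Path(tempfile.mkdtemp())
    t0 = time.perf_counter()
    arc = dst / "batch.tar.gz"
    with tarfile.open(arc, "w:gz") as tar:
        tar.add(src, arcname=".")
    with tarfile.open(arc) as tar:
        tar.extractall(dst / "out")
    return time.perf_counter() - t0

for n in (100, 1000):                 # vary the file count, see where the lines cross
    src = make_files(n)
    print(n, plain_copy(src), compressed_copy(src))
```

On a fast local disk the compression step usually dominates; the picture can look very different over a slow network share, which is exactly why the crossover point depends on the hardware.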
- Posts: 4
- Joined: 22 Apr 2020
Parallel file operations are only available in the Donation Edition, but you could try the performance settings to improve those times:
https://freefilesync.org/manual.php?topic=performance
It is true, though, that FFS runs considerably slower when syncing many small files. It would be nice to have 2-4 threads available by default when syncing, and to allow unlimited threads for those who donate.
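For intuition, multi-threaded copying of small files can be sketched like this (an illustration of the general idea only, not how FFS implements parallel sync; the paths are throwaway temp directories):

```python
# Copy many small files with a pool of worker threads.
from concurrent.futures import ThreadPoolExecutor
import pathlib
import shutil
import tempfile

src = pathlib.Path(tempfile.mkdtemp())
dst = pathlib.Path(tempfile.mkdtemp())
for i in range(20):                              # sample small files
    (src / f"f{i}.txt").write_text("data\n")

def copy_one(f):
    shutil.copy(f, dst / f.name)                 # one per-file work item

with ThreadPoolExecutor(max_workers=4) as pool:  # 2-4 threads, as suggested above
    list(pool.map(copy_one, src.iterdir()))
```

Threads help here mainly because per-file latency (open/close, metadata round trips) overlaps, which is the dominant cost with many small files.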
- Posts: 2451
- Joined: 22 Aug 2012
Compression before transfer would need to be performed at the send/source-side and decompression after transfer at the receive/target-side, and as such would require software to run at both locations involved in the sync. If not, it only would make things slower.
The same holds for the occasionally suggested use of checksums.
However, installing software on a file server location is often not possible.
The advantage of FFS is that it only requires its software to run on a single computer/laptop/server, which may not even need to host any of the locations involved in the sync. It just requires the machine running FFS to have file access to the locations involved in the sync.