Zenju, thanks for your work to date on this project and your phenomenally fast responses to bug submissions. I think this is a great piece of software and will only get better with your enthusiasm.
However, my company is currently unable to use it, because during synchronisation the folder structures are held in memory. For a disk with over 22 million files and only 4GB of RAM, this exhausts all of the memory and crashes before we are 20% done. I was going to raise an enhancement request to perhaps use temp files for some of the structures, but I think our situation is probably a special case and most people won't hit this issue.
It was suggested that I break the sync down into a number of smaller syncs. Unfortunately, this results in DB files being created in sub-folders on both the source and destination disks. The files on the source disk actually interfere with our own applications, which are monitoring those folders (DB files in the root folder are fine).
Despite the fact that we are currently unable to use this (and that it is actually my employer, not me, trying to use the software), I was so impressed that I was happy to make a small donation.
Keep up the good work :-)
Thank You
Thanks Keith for the support!
You can solve the problem of FFS creating multiple database files by splitting the task up in a different way: don't use the sub-folders for synchronisation, but the base folders, just as you would have done in the first place. Then enter filter settings to sync only a subset. The include/exclude filters are already applied during directory traversal and will therefore limit the required memory, yet only a single database file will be needed. FFS is smart enough to take the filter settings into account when writing the sync.ffs_db files, so this case is fully supported.
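For illustration only (the folder names are hypothetical), a first run could keep the base folders as the sync pair and set the include filter to something like

    \Customers\A*\*
    \Customers\B*\*

with later runs covering the remaining ranges (e.g. \Customers\C*\* and so on). Patterns are relative to the base folder, and multiple entries can be separated by semicolons or line breaks. Since folders that fall outside the filter are skipped during traversal, each run only holds a fraction of the 22 million files in memory, while the single sync.ffs_db file stays in the root folder.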