I have a local folder (40000 Files) and a remote folder on a network share. I just want to mirror all changes from local to remote, using RealTimeSync.
Change detection is very fast, and FreeFileSync gets launched quickly. But every time FreeFileSync starts up, its first step is to rescan the whole target folder (40,000 files on a remote network share) from scratch. This alone takes about 6 minutes, which is unacceptable for my use case.
Files are modified locally roughly every 2 minutes, and the worst mirroring latency we could accept is 10 to 20 seconds, including any scans and the syncing itself.
Is there any way to configure FreeFileSync so that the target never (or rarely) gets rescanned from scratch? The remote directory is modified by FreeFileSync exclusively, so whenever FreeFileSync restarts, it will find the folder in exactly the state it previously left it in. This makes the "rescan from scratch" quite redundant.
Also, RealTimeSync presumably knows which files were modified locally. Is there any way to use this information so that FreeFileSync does only a partial rescan, of the affected portion of the huge directory tree?
Any hints are greatly appreciated. Thank you, and best regards!
Stefan
FreeFileSync rescans target folder every time
- Posts: 3
- Joined: 21 Aug 2020
- Posts: 4056
- Joined: 11 Jun 2019
You could break the monitoring and syncs up into chunks, unless you have a single folder with 40,000 files. If the local files change that often, you are probably better off running a sync once every hour, or at some other fixed interval, rather than as changes occur.
- Posts: 3
- Joined: 21 Aug 2020
Thank you for your quick reply.
Splitting this up is not going to work either. Rescanning the whole target directory from scratch over and over again just seems like the wrong approach when we talk about "real-time synchronization". It wastes a lot of resources, and it is still too slow, even if I break the job up into 20 top-level chunks.
I fear that the whole RealTimeSync / FreeFileSync approach is not suitable for my use case. What I am looking for is some kind of "RAID 1 across Ethernet": when a file gets stored on one drive, it must be mirrored to a drive on another machine "immediately" (within 10 seconds or so), and it would have to work for huge directories. 40,000 files may not even be the end of it; we may soon have directories containing 100,000 or 200,000 files.
This may indeed be hard to implement on Windows. If RealTimeSync gave me output describing what exactly changed, I could attach a different executable to it, which would then just replay the recorded changes on the target drive, without ever rescanning or even looking at the target folder (just recklessly "throwing everything over there").
FreeFileSync looks great for many other use cases, like an occasional backup. I guess I am going to use it for that. So, thank you, and I will keep my eyes open for new features!
- Posts: 2
- Joined: 24 Aug 2020
Did you solve the problem? I am also monitoring a folder with 80k files in which only 1-3 files change occasionally, but every time it rescans all 80k files.
- Posts: 1037
- Joined: 8 May 2006
As it is, FFS has to monitor both the source & destination.
So why not just monitor the source?
If date/time (or archive bit) is sufficient, just check for dates > a particular date (or files with archive bit set).
And have that run periodically, on demand, or whatever your needs are.
If date > last_update, COPY files to destination
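The "if date > last_update, copy to destination" idea above could be sketched roughly like this. This is a minimal Python sketch of the approach, not FreeFileSync code; the function name and the paths in the commented loop are purely illustrative:

```python
import shutil
from pathlib import Path


def copy_newer(src_root: Path, dst_root: Path, last_sync: float) -> int:
    """Copy every file under src_root whose mtime is newer than last_sync.

    The destination is never scanned; whatever is there simply gets
    overwritten. Returns the number of files copied.
    """
    copied = 0
    for src in src_root.rglob("*"):
        if src.is_file() and src.stat().st_mtime > last_sync:
            dst = dst_root / src.relative_to(src_root)
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)  # copy2 also preserves the timestamp
            copied += 1
    return copied


# Run periodically; record the timestamp *before* each pass so that files
# modified during the pass are picked up on the next one:
#
# last_sync = 0.0
# while True:
#     started = time.time()
#     copy_newer(Path(r"C:\data"), Path(r"\\server\mirror"), last_sync)
#     last_sync = started
#     time.sleep(30)
```

Note that this only handles creations and modifications; deletions on the source would still need the occasional full mirror pass to propagate.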
- Posts: 3
- Joined: 21 Aug 2020
We have not managed to solve this on the basis of FreeFileSync / RealTimeSync. We are now going to implement our own tool from scratch, based on watching the NTFS change journals. Whenever there is a change, it is going to be "thrown" at the mirror directory, recklessly, without the mirror being rescanned. Every now and then, we are going to do a full sync ("robocopy /MIR" style) in order to recover from previously failed transactions.
I am using FFS for other tasks anyway; I think it is a neat tool, but it is not suitable for "extremely low-latency real-time mirroring over LAN".
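The "replay without rescanning" step described above might look something like this. A minimal Python sketch, assuming the change events have already been read from the NTFS change journal elsewhere; the event format and function name are invented for illustration:

```python
import shutil
from pathlib import Path


def replay(events, src_root: Path, dst_root: Path) -> None:
    """Apply recorded change events to the mirror without scanning it.

    `events` is a list of (action, relative_path) tuples, where action is
    "upsert" (file created or modified) or "delete". In the real tool the
    events would come from the NTFS change journal; here they are simply
    passed in, so only the replay step itself is shown.
    """
    for action, rel in events:
        dst = dst_root / rel
        if action == "upsert":
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src_root / rel, dst)  # blindly overwrite the mirror
        elif action == "delete":
            dst.unlink(missing_ok=True)  # already gone is fine
```

Since replay never verifies the mirror's state, the periodic full sync mentioned above remains necessary to recover from events that were lost or failed mid-copy.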
- Posts: 25
- Joined: 25 Jul 2020
Hi Stefan
For your problem, have you considered testing Syncthing? It is very good at real-time mirroring over LAN.
- Posts: 4056
- Joined: 11 Jun 2019
Yeah, this situation is really described by your example, "RAID 1 over the network", which isn't supported, and there is no solution out there as far as I am aware. Most environments like the one you describe typically rely on a periodic backup, not real-time mirroring.
- Posts: 3
- Joined: 26 Sep 2020
Hello,
also looking into quickly synchronizing two folders with many files (160,000), but only twice a day.
Currently using JFileSync, which is not real-time but is fast at finding the changes (because it uses a client and a server to get the file lists, as opposed to a share).
Seeing how fast Everything by voidtools can get file lists, using the NTFS journal would be much faster, I guess.
I have seen DSynchronize mentioned on their forum. It may be what the poster was looking for:
http://dimio.altervista.org/eng/dsynchronize/dsynchronize.html
It uses the NTFS journals and has a real-time mode, I believe.
It would be great if FreeFileSync could use a server/client mode too, to get the file lists quickly on both sides of the sync (please correct me if it already can).
Thanks