I would like to ask if FFS can add multiple targets to a one-way copy/backup job.
Reason: I would like to create 2-3 backup copies of video project files onto external 8 TB USB drives. Filling one drive takes about 16 h (at 100 MB/s), and making 2-3 copies takes 2-3 times as long using just one computer. Since the source volume is plenty fast, it would be nice to be able to specify multiple targets at once and use the multiple-threads feature to push data to all target drives simultaneously.
Is this possible? If yes, soon?
FR: multiple targets (solved)
- Posts: 4
- Joined: 6 Oct 2020
- Posts: 2451
- Joined: 22 Aug 2012
Simply select multiple left-right folder pairs.
Your left base folder can each time be one and the same source folder; the right base folder each time a different destination folder.
You may get a warning that there is an overlap between locations of the different pairs. This is correct, as each pair comprises the source base folder (and its subfolders). However, if you run a left-to-right one-way sync (Mirror or Update) variant you can safely ignore that warning, as the contents of the overlapping source location will not change due to the sync.
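For reference, a saved FFS batch job with several folder pairs sharing one source looks roughly like the sketch below. The paths are placeholders, and the exact element names are an assumption that may differ between FFS versions; the safest approach is to build the pairs in the GUI and save the job, then inspect the resulting .ffs_batch file.

```xml
<?xml version="1.0" encoding="utf-8"?>
<FreeFileSync XmlType="Batch">
    <FolderPairs>
        <Pair>
            <Left>D:\VideoProjects</Left>
            <Right>E:\Backup1</Right>
        </Pair>
        <Pair>
            <Left>D:\VideoProjects</Left>
            <Right>F:\Backup2</Right>
        </Pair>
    </FolderPairs>
</FreeFileSync>
```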
- Site Admin
- Posts: 7212
- Joined: 9 Dec 2007
(Plerry)>However, if you run a left-to-right one-way sync (Mirror or Update) variant you can safely ignore that warning, as the contents of the overlapping source location will not change due to the sync. (Plerry, 07 Oct 2020, 07:48)
This warning should not be ignored in general, and for Mirror-variant syncs it won't come up. It only comes up if there are multiple accesses to a folder (from different folder pairs), at least one of which is a write.
There is one caveat though: the warning is (yet) a bit simplistic and doesn't consider the exclude filters. So you may have prevented the trigger condition with excludes, but the warning can still come up. Only then is it safe to ignore.
- Posts: 2451
- Joined: 22 Aug 2012
(Plerry)>>... you can safely ignore that warning ...
(Zenju)>This warning should not be ignored in general ...
My statement referred to the specific use case @jgeerds described.
It was certainly not intended as a general statement.
(Zenju)>... and for Mirror-variant syncs it won't come up. It only comes up if there are multiple accesses to a folder (from different folder pairs) at least one of which is a write. ...
Did not know that. Learned something again.
My experience with said warning mostly concerns cases in which, as @Zenju describes, the (potential) conflicts are resolved/prevented via Include and/or Exclude Filter settings.
- Posts: 4
- Joined: 6 Oct 2020
Hi Plerry,
I did set up a job as you mentioned. It fixes the problem from a convenience standpoint, but not from a speed-efficiency standpoint, since the list of folder pairs will be handled sequentially, not in parallel.
I also tried increasing the number of threads for the source to 2x, while keeping the number of threads for each target at 1x, but it had the opposite effect, causing thread thrashing on the target USB drives and dropping transfer speed to 50%.
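Outside FFS, the "one read stream, several parallel write streams" idea can be sketched in a few lines of Python. This is a hypothetical standalone workaround, not FFS functionality: one worker thread per target drive, so each USB disk receives a single sequential write stream while the fast source volume is read concurrently. Paths are placeholders.

```python
# Sketch: mirror one source tree to several target drives in parallel,
# one worker thread per target (assumes the source is read-only here).
import shutil
from concurrent.futures import ThreadPoolExecutor

def mirror(source: str, target: str) -> str:
    # dirs_exist_ok=True lets repeated runs update an existing backup copy
    shutil.copytree(source, target, dirs_exist_ok=True)
    return target

def mirror_to_all(source: str, targets: list[str]) -> list[str]:
    # One thread per target, so each drive gets one sequential write
    # stream instead of multiple threads thrashing the same disk.
    with ThreadPoolExecutor(max_workers=len(targets)) as pool:
        return list(pool.map(lambda t: mirror(source, t), targets))
```

Note this copies everything unconditionally; it has none of FFS's change detection, verification, or error handling, so it only illustrates the concurrency pattern.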
- Posts: 4
- Joined: 6 Oct 2020
I tried launching a second instance of FFS, but the source directory is "locked" by the first FFS instance, and the second instance won't touch it until the first is done. Is there a way to ignore the temp lock file that is created?
- Posts: 2451
- Joined: 22 Aug 2012
You cannot tell FFS to ignore lock files.
However, you can tell FFS not to create lock files.
(Set the LockDirectoriesDuringSync flag to false. See the description on the Expert Settings manual page.)
This will allow simultaneous access to locations shared between multiple active FFS instances.
Doing so can give undesired effects if any of said FFS instances would write to such shared location(s), but should be safe if all said FFS instances only read from such shared location(s).
Note that LockDirectoriesDuringSync and other flags are global flags, and apply to all FFS syncs.
So, be cautious!
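For reference, a sketch of how such a flag might look inside the global settings file (GlobalSettings.xml in the FFS configuration folder). The surrounding element names here are an assumption and may differ by version, so edit the file FFS actually writes rather than creating one from this sketch:

```xml
<FreeFileSync XmlType="GlobalSettings">
    <!-- Hypothetical placement; the flag name comes from the
         Expert Settings page, the structure around it may vary. -->
    <LockDirectoriesDuringSync Enabled="false"/>
</FreeFileSync>
```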
- Posts: 4
- Joined: 6 Oct 2020
This is doing exactly what I wanted. Perfect, thanks so much!