Hello,
I have several FFS batch jobs scheduled with the Windows Task Scheduler. All have
- the same source drive & source directories
- the same target drive
- but different target directories.
Will I run into any problems (file locks etc.) if the windows task scheduler causes several batch jobs to start at the same time, or if a second one is started while the first one is still running?
Kind regards,
tim.
If there is a (full or partial) overlap between the left and/or right locations of different FFS syncs, you will run into the problems you list yourself.
• The most direct way to overcome the problem is to ensure your scheduled tasks never overlap in time.
Assuming your FFS syncs are left-to-right Mirror or Update (uni-directional) syncs,
• an alternative is to define a single sync configuration containing, as multiple left-right folder pairs, the left-right pairs of all your present individual sync configurations, and to run this combined configuration as a single scheduled task (see the first sketch below). FFS will still warn that there is an overlap between parts of the multiple folder pairs, but this warning can be safely ignored if my assumption applies.
• yet another alternative is to disable directory locking (the *.ffs_lock files), as sketched further below. Again, if my assumption applies, this should be safe to do. However, note that LockDirectoriesDuringSync is a global setting, so setting it to false may be unsafe for other syncs you may want to run.
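To make the second bullet concrete, a combined batch file could look roughly like the sketch below, assuming two existing jobs that back up D:\Data to E:\Backup\Job1 and E:\Backup\Job2 (made-up example paths). The safest way to create it is via the FFS GUI (add all folder pairs, then File > Save as batch job); the exact XML element names vary between FFS versions, so treat this purely as an illustration of the structure.

<?xml version="1.0" encoding="utf-8"?>
<!-- Illustrative sketch only: element names differ per FFS version and most
     settings are omitted; generate the real file via File > Save as batch job. -->
<FreeFileSync XmlType="BATCH">
    <Synchronize>
        <Variant>Mirror</Variant>  <!-- or Update; uni-directional, per the assumption above -->
    </Synchronize>
    <FolderPairs>
        <Pair>
            <Left>D:\Data</Left>           <!-- shared source (example path) -->
            <Right>E:\Backup\Job1</Right>  <!-- target of the former first job -->
        </Pair>
        <Pair>
            <Left>D:\Data</Left>
            <Right>E:\Backup\Job2</Right>  <!-- target of the former second job -->
        </Pair>
    </FolderPairs>
</FreeFileSync>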
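For the third bullet: directory locking is controlled in GlobalSettings.xml, which sits next to FreeFileSync.exe for a portable install and under %AppData%\FreeFileSync for an installed one. Close FreeFileSync before editing, as the file is rewritten when the program exits. The nesting and attribute form below are an assumption and may differ per version; the safe approach is to locate the existing LockDirectoriesDuringSync entry in your own file and only change its value.

<?xml version="1.0" encoding="utf-8"?>
<FreeFileSync XmlType="GLOBAL">
    <General>
        <!-- Assumed element form; look for the existing entry in your own file. -->
        <LockDirectoriesDuringSync Enabled="false"/>
    </General>
</FreeFileSync>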
Hello Plerry,
thanks for the quick response.
I agree that the safest way would be to schedule them not to overlap. However, if a task scheduled in the Windows Task Scheduler cannot run because the machine isn't up, the scheduler will start those tasks as soon as it can, and this can then lead to several backup jobs being started more or less simultaneously (I have no idea how the scheduler handles its missed jobs).
I will look into your suggestion to disable directory locking; sounds plausible.
But can something actually go wrong in my situation or will FFS just skip files in one job if another job is backing them up at this very moment?
Cheers,
tim.
PS -
The manual says that in my scenario "other instances are queued to wait."
If that means that the other FFS jobs are merely paused, wait for the lock to be lifted, and then resume their work, that would not be a problem.
Anyone know if my interpretation is correct?
Cheers,
tim
> But can something actually go wrong in my situation or will FFS just skip files in one job if another job is backing them up at this very moment?
Under the assumption I mentioned (the syncs only modifying the non-overlapping locations of the different syncs), disabling directory locking does not pose a risk for your presently considered syncs.
But, as mentioned, since disabling directory locking is a (user-bound) global setting, it may pose a risk for other, presently not considered syncs run under the same user account.
> If that means that the other FFS jobs are merely paused and wait for the lock to be lifted and will then resume their work; that would not be a problem
Whether that can cause problems strongly depends on your Task Scheduler settings in the Settings tab.
You don't want tasks piling up, your tasks (trying to) run for too long a time, or, conversely, being stopped/killed before they had the chance to perform an actual sync.
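For reference, the relevant knobs on that Settings tab appear as follows when a task is exported to XML (right-click the task > Export in Task Scheduler); the values below are only illustrative, not a recommendation for your specific jobs.

<!-- Fragment of an exported task definition; only <Settings> is shown. -->
<Settings>
    <!-- Run the task as soon as possible after a scheduled start was missed,
         e.g. because the machine was off. -->
    <StartWhenAvailable>true</StartWhenAvailable>
    <!-- If the task is already running, Queue makes a new start wait for the
         running instance instead of starting in parallel. -->
    <MultipleInstancesPolicy>Queue</MultipleInstancesPolicy>
    <!-- Stop the task if it runs longer than this ISO 8601 duration (2 hours
         here); pick a limit comfortably above a normal sync run. -->
    <ExecutionTimeLimit>PT2H</ExecutionTimeLimit>
</Settings>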
Probably the simplest way is still my second bullet: create a single FFS sync configuration with all the syncs in scope, and run it as a single scheduled task. Then there is no need to disable directory locking, and also no need for sync jobs to wait until (partially overlapping) sync jobs have ended.
Hi Plerry,
thanks again for your feedback. I agree that your second bullet is the easiest solution. Unfortunately I do not think it will work for me, because:
Imagine having 3 FFS jobs planned:
- one job runs every 10 minutes, writing to location A,
- one job runs every hour, writing to location B, and
- one job runs every day, writing to location C.
Now the machine is down for a day, so the next morning the Task Scheduler will start all 3 jobs, and I do not see how I can squeeze all 3 tasks into one job.
Therefore I either fiddle around with the directory lock, or, if the jobs will just sit and wait when a directory is locked, I leave it as it is (which I would prefer).
Cheers,
tim.