Improved support for large file sets

Posts: 1
Joined: 11 Oct 2022

extremebias

Hi, a suggestion for improvement:

Crash-resistant in-progress comparison for large file sets
Review strategy for handling very large file sets to minimise elapsed time

I just wanted to copy my Apple Time Machine backup of some 2.65TB from an existing drive that was running out of space to a new, larger drive. I tried drag and drop and that completed. But I had tried rsync first, and it appeared to need far more space than the original (perhaps I needed to use the sparse file option): after a week of processing it ran out of space on the new disk, which had twice the capacity!

I switched back to good ol' drag and drop and it completed after a week. I found FFS and thought I could use it to compare the two copies. After 4 days and using an in-memory database of some 56GB, it crashed (even though there was ample free disk space).

I think there is a great opportunity to consider the time bounds for very large file sets. For example, would it make sense to split the work across two processes, each using its own CPU to walk the directory tree of the "from" or "to" location? Could the overall time be reduced if the two processes then shared their results and decided between them what needs to change/sync? (A rough sketch of the idea follows below.) Could the app also run some tests and offer an option to measure choke points for reading/writing/transmitting, etc.?
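
To illustrate the idea, here is a minimal sketch (not how FreeFileSync actually works; the paths and the scanTree helper are hypothetical): scan both trees concurrently on separate threads, then diff the two snapshots to decide what needs syncing.

#include <filesystem>
#include <future>
#include <iostream>
#include <map>
#include <string>

namespace fs = std::filesystem;

// Hypothetical helper: collect relative path -> file size for one tree.
std::map<std::string, std::uintmax_t> scanTree(const fs::path& root)
{
    std::map<std::string, std::uintmax_t> files;
    for (const auto& entry : fs::recursive_directory_iterator(
             root, fs::directory_options::skip_permission_denied))
        if (entry.is_regular_file())
            files[fs::relative(entry.path(), root).string()] = entry.file_size();
    return files;
}

int main()
{
    // Placeholder paths for the "from" and "to" locations.
    fs::path source = "/Volumes/OldBackup";
    fs::path target = "/Volumes/NewBackup";

    // Scan both trees concurrently so slow I/O on one side overlaps with the other.
    auto futSrc = std::async(std::launch::async, scanTree, source);
    auto futDst = std::async(std::launch::async, scanTree, target);
    auto src = futSrc.get();
    auto dst = futDst.get();

    // Report files missing on the target or differing in size.
    for (const auto& [relPath, size] : src)
    {
        auto it = dst.find(relPath);
        if (it == dst.end())
            std::cout << "missing on target: " << relPath << '\n';
        else if (it->second != size)
            std::cout << "size differs: " << relPath << '\n';
    }
}

In a real tool the two scanners would stream their results to each other instead of holding everything in memory, which is exactly where the crash-resistance question above comes in.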
Site Admin
Posts: 7211
Joined: 9 Dec 2007

Zenju

After 4 days and using an in-memory database of some 56GB, it crashed (even though there was ample free disk space). extremebias, 11 Oct 2022, 00:25
Sounds like macOS is crashing after running out of memory due to its buggy file system caching. The latest FFS 11.26 implements a workaround: viewtopic.php?t=8039