Transferring huge amounts of data (170TB)

ochompsky
Posts: 1 · Joined: 26 Feb 2024

I have a massive personal Unraid array: 174 TB with over half a million files.

I am building a replacement array and going the route of transferring via a FreeFileSync Docker container. I know it will take a while (~20 days at my network speed), but I'm wondering: does the amount of data matter, or will it just keep chugging along?
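For reference, the ~20-day figure lines up with what a single saturated gigabit link works out to. A quick back-of-the-envelope check in Python (the ~110 MB/s effective throughput is an assumption, not a measurement):

TOTAL_BYTES = 174 * 10**12   # 174 TB to copy
THROUGHPUT = 110 * 10**6     # assumed effective bytes/second on a 1 GbE link

seconds = TOTAL_BYTES / THROUGHPUT
print(f"Estimated duration: {seconds / 86400:.1f} days")   # roughly 18 days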

Also, while it's running in 'Update' mode, if I add new files on the left, will it automatically know they were added even though it's in the middle of a sync? Or should I keep both systems read-only during this copy?

Also, I see the database file is disabled by default for Mirror; is there any benefit to using it?
xCSxXenon
Posts: 3603 · Joined: 11 Jun 2019

https://freefilesync.org/faq.php#limitations
If you have a GB of RAM free, you should be fine.
Files added during the sync won't be transferred by FFS until another comparison/sync is performed. You could put both sides in read-only mode, or simply run another sync afterwards to catch any remnants.
It seems odd to me that using a database is disabled by default for Mirror; I can't think of any reason not to use it.
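If you want a sanity check after the final pass, tallying file counts and total bytes on each side will expose anything left over. A minimal Python sketch, assuming placeholder mount points for the two arrays:

import os

def tally(root):
    # Walk the tree and add up file count and total size in bytes.
    files, size = 0, 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                size += os.path.getsize(os.path.join(dirpath, name))
                files += 1
            except OSError:
                pass  # skip files that vanished or can't be read
    return files, size

for root in ("/mnt/old_array", "/mnt/new_array"):  # placeholder paths, adjust to your mounts
    files, size = tally(root)
    print(f"{root}: {files} files, {size / 10**12:.2f} TB")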