Hi,
I want to know: what happens when a file in the Source gets corrupted? (Mirror protocol; syncing from Source to Target)
Please help me.
Zeno
Source file is corrupted
- Posts: 4
- Joined: 19 Apr 2022
- Posts: 4056
- Joined: 11 Jun 2019
FFS will sync that corrupted file to the destination. FFS does not eliminate the need for backups. Look into versioning in FFS.
- Posts: 4
- Joined: 19 Apr 2022
Thanks for the quick reply.
Yes, I get the logic.
With versioning there is always the problem of the volume used. And one would have to preserve all the versions, because one can't be sure which version doesn't have the corrupted file.
Corrupted files have some characteristic of corruption. So isn't it possible for software to detect that and not copy the file to the destination?
Is there any software that protects against copying a corrupted file to the destination?
Thanks again
- Posts: 4056
- Joined: 11 Jun 2019
Well, you wouldn't have to preserve all versions, because only the latest one could be corrupted. The only way you would corrupt multiple versions is if you made changes and synced them multiple times, but you can't make changes because the file is corrupted. FFS only saves versions when overwriting/deleting an existing file, not every time a sync is run.
Detecting corruption is a really complex thing. Sure, it's easy to detect corruption in the header, but then you have to have a header template for every file type ever. Also, corruption is unlikely to be in the header. BadPeggy is the only software I have used for detecting corruption, and that's a whole program dedicated to a single file type.
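To illustrate why header checks don't scale, here is a hedged sketch (not part of FFS or BadPeggy; the signature table and function names are my own): a header check needs a known "magic number" for each file type, can only screen types it knows about, and a valid header still says nothing about corruption deeper in the file.

```python
import os
from typing import Optional

# Minimal signature table: one magic number per file type we know about.
# Every other file type on disk would need its own entry.
MAGIC = {
    ".jpg": b"\xff\xd8\xff",        # JPEG start-of-image marker
    ".png": b"\x89PNG\r\n\x1a\n",   # PNG signature
    ".pdf": b"%PDF-",               # PDF header
}

def header_looks_valid(path: str, data: bytes) -> Optional[bool]:
    """True/False if we have a signature for this file type, None if we don't."""
    ext = os.path.splitext(path)[1].lower()
    sig = MAGIC.get(ext)
    if sig is None:
        return None  # unknown type: cannot screen it at all
    return data.startswith(sig)
```

Even when this returns True, the body of the file may still be damaged, which is why a dedicated tool like BadPeggy actually decodes the image instead of just checking the header.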
- Posts: 4
- Joined: 19 Apr 2022
Thanks for the long explanation.
It added to my knowledge.
What I understand is:
Suppose I take a backup of a txt file (Source to Target) and I don't edit or change the txt file in the Source.
Now if the file in the Source gets corrupted and I take a backup a second time, the corrupted file will not be backed up. So however many times I take a backup, the corrupted file will not be overwritten to the Destination.
Am I right?
Thanks in advance.
- Posts: 4056
- Joined: 11 Jun 2019
You are mostly correct. If you back it up and don't make changes, FFS won't ever sync it again.
NOW, if the source gets corrupted and its timestamp changes, FFS will see that difference and sync it to the destination. This behavior can differ a little based on how you set up FFS to detect and handle differences. This is not a problem, because you have versions! The latest versioned copy should now be a healthy copy. The only way you would overwrite all the versioned copies with the corrupted copy is if you sync, the source corrupts, you sync again, the source gets its data updated again, and then you sync again, over and over until you overwrite the number of versions you specified to keep. The risk is fairly low, as there would have to be an insanely catastrophic failure to get to that point.
With that said, a proper backup includes an offsite/air-gapped copy. A file that is months out of date is better than no file at all.
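If you want something close to the protection being asked about, one option outside of FFS is a hash manifest: record a checksum for files that should never change, and flag any file whose content differs before you run the sync. A hedged sketch (the manifest format and function names are my own assumptions, not an FFS feature):

```python
import hashlib
import json

def sha256_of(path):
    """Hash a file's content in chunks so large files don't fill memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def changed_files(manifest_path, paths):
    """Return the paths whose current hash differs from the stored manifest."""
    with open(manifest_path) as f:
        manifest = json.load(f)
    return [p for p in paths if manifest.get(p) != sha256_of(p)]
```

A file reported by changed_files that you know you did not edit is a corruption suspect; you could exclude it from the sync and restore it from a versioned copy instead.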
- Posts: 4
- Joined: 19 Apr 2022
OK, thanks again.
So how do I set up FFS so that it doesn't detect timestamp changes and therefore wouldn't sync the corrupted file to the destination? What settings should I use?
And thanks for BadPeggy. But it is only for the JPEG extension.
- Posts: 4056
- Joined: 11 Jun 2019
Yes, I know? That's why I said, "that's a whole program dedicated to a single file type."
Setting up FFS to not detect timestamps isn't the fix either. Read the whole thread again. FFS cannot do what you want: FFS will sync corrupted files to the destination, and you can't stop that.