Keep previous versions of files
- Site Admin
- Posts: 7210
- Joined: 9 Dec 2007
> Would it make sense to have variables like these?
It's probably a sign that a thread has become too long, if items discussed at
the beginning are rediscovered at the end ;)
@mfreedberg:
It seems you're comparing the modification time of the revisioned file against
the modification time of the current active version, which is unrelated.
Generally FFS preserves file modification times.
- Posts: 74
- Joined: 17 Mar 2008
@Zenju - the current active version does have the right created date and
last modified date, but the revisioned file does not. What do you see in your
tests with revisioned files?
What date *should* we see on the revisioned file? I totally agree, FFS
normally preserves the file modification times, but I do not see that to be
the case for the revisioned file.
Maybe this should be a new thread (grin).
- Site Admin
- Posts: 7210
- Joined: 9 Dec 2007
There is probably a misunderstanding as to what versioning does: when FFS
updates a file, it revisions the old file first by moving it to a custom
folder, preserving its modification time. Then the new version is copied
over. This is the sequence of steps, at least conceptually. This means the
versioning folder never has the most current version, but only the second-most
current one.
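Roughly sketched (a conceptual illustration only, not the actual FFS implementation; paths and helper names are made up), the sequence looks like this:
```python
import shutil
from pathlib import Path

def revision_then_update(target: Path, new_version: Path, revisions_dir: Path) -> None:
    """Conceptual sequence only: revision the old file, then copy the new one over."""
    revisions_dir.mkdir(parents=True, exist_ok=True)
    if target.exists():
        # Moving (a rename, or copy+delete across volumes) keeps the old file's modification time.
        shutil.move(str(target), str(revisions_dir / target.name))
    # copy2 copies content plus metadata, so the new file keeps its modification time too.
    shutil.copy2(str(new_version), str(target))
```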
> It's probably a sign that a thread has become too long
It's a sure sign of laziness and disrespect on my side, my apologies.
Regarding the %variables% issue, I liked them because using them in the
Deletion handling dropdown wouldn't clutter the UI and would be transparent to
those who did not need them. Nevertheless, I understand the arguments made for
simplicity.
- Posts: 74
- Joined: 17 Mar 2008
Thanks for the update on the approach:
> When FFS updates a file, it revisions the old file first, by moving it to a
custom folder, preserving its modification time. Then the new version is
copied over. This is the sequence of steps, at least conceptually. This means
the versioning folder never has the most current version, but only the second-
most current one.
The problem I am seeing on my system is that the old file, when moved to the
custom folder and also renamed, does not seem to be retaining its last
modified date. I definitely understand that it is the second most current
version of the file, but I am not sure that the process is successfully
retaining the last modified date.
I will do some more testing at home, but if you can also verify on your end
that files moved to the custom folder and renamed retain their last modified
date, that would be great!
- Site Admin
- Posts: 7210
- Joined: 9 Dec 2007
> if you can also verify
I checked, it works fine in my tests.
- Posts: 5
- Joined: 12 Sep 2012
Hi all!
Sorry to bother, but I preferred the way FFS versioned the files and folders in
version 5.6.
I could easily overwrite a full version structure onto a fully backed-up folder
and recover its full state in time.
Incrementing a version number in the filename subverts the original data,
since version 5.7 changes the original filename.
I would suggest that the new way of versioning could give the user the option
to choose which way they want to version their file changes.
I know it's difficult to please everyone, and for that reason I'm still using 5.6
instead of 5.7.
Anyway, as I see that you are great at accepting suggestions, here is mine.
I also want to give my congratulations for the amazing application that FFS is.
I've tried a lot of them and for me FFS is the top.
Thank you all.
Best Regards
Jorge
- Posts: 5
- Joined: 12 Sep 2012
Sorry for my typos :\
- Site Admin
- Posts: 7210
- Joined: 9 Dec 2007
I don't see how you could take advantage of the old (<5.7) versioning scheme:
If you want to restore a specific state, you would have to start with the
current version, and then manually copy over all version folders beginning
with the most recent down to the date you want to restore. Also this approach
would leave files behind that have been newly created in between "now" and the
date you are restoring.
It's possible I'm overlooking an important scenario, but right now I don't see
much of a functional loss. But clearly, the new scheme places even more
emphasis on single file revisioning.
- Posts: 5
- Joined: 12 Sep 2012
OK, in my options I have:
Mirror
Versioning
With Mirror, the destination always equals the origin. With Versioning, all
the files that are deleted or changed are moved to a version folder with a
date.
If I want to recover the full origin to a certain date for which I have a version,
I just need to copy the version folder over the full mirrored version.
I have several schedules, one per day of the week, in order to have recovery folders
in Versioning (5.6).
:)
- Posts: 5
- Joined: 12 Sep 2012
I'm using FFS as a file backup solution :)
- Site Admin
- Posts: 7210
- Joined: 9 Dec 2007
> copy the version folder over the full mirrored version.
There are two types of versioning that need to be distinguished:
1. Single-file versioning: this is possible with both the old and the newer versioning scheme. The newer scheme, however, makes it easier since all versions are listed in the same folder.
2. Restoring all files for a given time: this is possible with neither the old nor the new versioning scheme. In the old scheme one might pull stunts like the one described in my last post, but essentially this is no real solution.
This is how it currently looks. 2 doesn't sound very good, but if there is a
better way, I'll probably implement it once I become aware of it.
- Posts: 4
- Joined: 4 Apr 2010
Hmm, just discovered the new versioning algorithm after upgrading to 5.7. Guess
it's too late to vote no.
I guess it's age, but I now hate change unless it brings significant benefits.
With the new versioning algorithm, you still can't properly restore because there
is no full backup to restore from, so what's the point?
Rant over. On a more constructive note, read up on 'DeLorean Copy' at http://schinagl.priv.at/nt/ln/ln.html#deloreancopy. It may give you insight into how to improve FreeFileSync
further.
- Posts: 1
- Joined: 26 Aug 2008
Thought I would add that I upgraded to 5.7 and didn't think to check my batch
runs afterwards. I noticed a couple of weeks later that all of my versioning wasn't
working: I was getting folders but no files. I have everything set up using
ffs_batch files that run on a schedule. I see the format of the XML changed,
and it left things working, but not fully, and with no messages that I've seen. I
just updated all my ffs_batch files and will be checking over the next couple of days
to make sure I didn't miss anything. Just something to be aware of if you
upgrade: make sure you open your files in FFS and save them to reformat the
XML. Or, as I did, save one and then manually update the rest with the updated
format.
- Site Admin
- Posts: 7210
- Joined: 9 Dec 2007
> I guess it's age, but I now hate change unless it brings significant
benefits. With the new versioning algorithm, you still can't properly restore
because there is no full backup to restore from, so what's the point?
If you want to find all versions of a particular file, you now have them all
listed next to each other. So it's at least one improvement without giving up
much.
> DeLorean Copy
Looks like what has been discussed here: viewtopic.php?t=1843
> Noticed a couple weeks later that all of my versioning wasn't working
This is unfortunately a very lame config migration bug.
> make sure you open your files in FFS and save them to reformat the xml
And this is the workaround until the fix in the next release.
- Posts: 1
- Joined: 11 May 2009
Zenju: I think the problem with versioning currently is with the limit: now I can limit the number of versions. The problem is if I have small files which change frequently and other, large files which change rarely. If I set a limit of 10, it is not OK: I will have 10 versions of the small file, but the oldest is, for example, from yesterday. That is not enough. And I have 10 versions of the large one, but the oldest is a year old, for example.
The other problem now is that if I create a file and then run FFS, it is mirrored, for example. Then I delete this file. It goes to the versioning folder. And that's it. It won't be deleted EVER. Right? If the user could set "keep versions for 2 weeks", then it would be deleted after 2 weeks. Hmm?
Br,
TG
- Posts: 7
- Joined: 12 Oct 2010
For your first problem, you may like to consider setting up two sets of syncs, both with the same source and target folders. You can limit the first set to files smaller than a certain size and limit the versions to, say, 50. You can then limit the second set to files larger than a certain size and limit the versions to, say, 5.
I did post a VB script that purges the folder based on the age of the file. For example, by defining TimeGaps=Array(1,2,8,16,32,90):
The 1st copy is older than 1 day but younger than 2 days, if indeed there is such a file
The 2nd copy is older than 2 days but younger than 3 days, if indeed there is such a file
The 3rd copy is older than 3 days but younger than 8 days, if indeed there is such a file
The 4th copy is older than 8 days but younger than 16 days, if indeed there is such a file
The 5th copy is older than 16 days but younger than 32 days, if indeed there is such a file
See viewtopic.php?t=1827&p=7447#p7447
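Roughly sketched (Python here just for illustration; the details of my script may differ), the idea is to keep at most one copy per TimeGaps bracket and delete the rest:
```python
import time
from pathlib import Path

TIME_GAPS = [1, 2, 8, 16, 32, 90]  # bracket boundaries in days

def purge_by_age(revision_dir: Path) -> None:
    """Keep at most one file per age bracket; delete everything else."""
    now = time.time()
    files = sorted((p for p in revision_dir.iterdir() if p.is_file()),
                   key=lambda p: p.stat().st_mtime, reverse=True)
    kept = set()
    for f in files:  # newest first
        age_days = (now - f.stat().st_mtime) / 86400
        bracket = next((i for i, limit in enumerate(TIME_GAPS) if age_days < limit), None)
        if bracket is None or bracket in kept:
            f.unlink()  # older than the last boundary, or the bracket already has its copy
        else:
            kept.add(bracket)
```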
- Posts: 2
- Joined: 20 Nov 2012
Make the deletion handling from version 5.6 available as a new type of "Deletion handling" = Backup.
example:
Have a file SyncName.ffs_gui with the following contents:
Line 1) from \\Server\PC1\DirSync\*.* to \\PC1\DirSync
Line 2) from \\Server\PC2\DirSync\*.* to \\PC2\DirSync
Backup folder: \\Server\Backup.
After the first synchronization (variant=mirror), the files have changed as follows:
\\Server\PC1\DirSync\File1.ini - changed content
\\Server\PC1\DirSync\DirA\File2.bin - new file
\\Server\PC2\DirSync\File1.ini - changed content
\\Server\PC2\DirSync\DirA\File2.bin - file deleted
After synchronization (variant=mirror and "Deletion handling"=Backup) we get these files in \\Server\Backup:
\\Server\Backup\SyncName_2012-12-10_204101\1\File1.ini
\\Server\Backup\SyncName_2012-12-10_204101\2\File1.ini
\\Server\Backup\SyncName_2012-12-10_204101\2\DirA\File2.bin
If a log file is created (\\Server\Backup\SyncName_2012-12-10_204101\SyncName.log), it is possible to do an automatic rollback using a batch file.
Example of a compact version of the protocol (listing only new files):
\\PC1\DirSync\DirA\File2.bin
Example of a full report:
B|1\\
F|\\Server\PC1\DirSync\
T|\\PC1\DirSync\
C|File1.ini
N|DirA\File2.bin
B|2\
F|\\Server\PC2\DirSync\
T|\\PC2\DirSync\
C|File1.ini
D|DirA\File2.bin
*Where:
B-backup path
F-from path
T-to path
C-change file
N-new file
D-delete file*
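Roughly sketched (Python instead of a batch file, just to illustrate; the restore policy here is only an assumption), a rollback could read the full report back like this:
```python
import shutil
from pathlib import Path

def rollback(log_path: Path, backup_root: Path) -> None:
    """Hypothetical rollback from the proposed B/F/T/C/N/D records."""
    backup_sub = to_dir = None
    for line in log_path.read_text().splitlines():
        if "|" not in line:
            continue
        tag, value = line.split("|", 1)
        if tag == "B":            # backup subfolder for this folder pair
            backup_sub = backup_root / value.strip("\\")
        elif tag == "T":          # 'to' path of the folder pair
            to_dir = Path(value)
        elif tag in ("C", "D"):   # changed/deleted: restore the backed-up copy
            dest = to_dir / value
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(backup_sub / value, dest)
        elif tag == "N":          # newly created by the sync: remove again
            (to_dir / value).unlink(missing_ok=True)
```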
- Site Admin
- Posts: 7210
- Joined: 9 Dec 2007
I'm coming back to this issue and want to see if there isn't an improvement, maybe even a "solution" to:
[404, Invalid URL: https://sourceforge.net/p/freefilesync/feature-requests/315/]
[404, Invalid URL: https://sourceforge.net/p/freefilesync/feature-requests/316/]
Summing up the discussion so far, there are two relevant naming conventions:
1. C:\REVISIONS\folder\file.txt <timestamp>.txt
2. C:\REVISIONS\<job name> <timestamp>\folder\file.txt
and two relevant limits
I. Limit versions per file
II. Cleanup after x days
Unfortunately 1 is tied to I and 2 is tied to II for performance reasons, which severely constrains a more generic solution. So a replacement for what is currently implemented could be 2/II.
Question: Am I missing some clever idea, or are there only these two major building blocks, which happen to be incompatible? Seeing how the limits depend on the naming convention seems to indicate that the naming convention needs to be hard-coded.
- Site Admin
- Posts: 7210
- Joined: 9 Dec 2007
Of the two limits, II is the more useful: there is an inverse correlation between the time passed since a file was deleted and the likelihood that one might need it again. A "limit versions" setting, on the other hand, is largely artificial and can only barely be justified by a weak correlation between version count and time passed.
In real-world terms: if you want to clean up your house and get rid of superfluous items, you don't set a numeric limit for each item type and throw away everything after x items (= limit versions). You also don't place a size limit and throw away everything that doesn't fit in x cubic meters (= size limit; yes, one could argue that the size of the rooms is a natural size limit, but in practice, if the stuff is useful, you'll find a way to get more space rather than throw it away). But you may decide to throw everything away which you haven't used for more than a year (= limit by days).
The naming conventions have the following advantages:
1.
-> easy to find a specific version of a file via a file browser
2.
-> easy to delete all versions older than a certain date (either manually or via a new limit in FFS)
-> easy to do an "undo" by copying over deleted and overwritten files from last sync (can be useful in practice even if this does not cleanup the newly created files)
-> a single revisions directory can be used by multiple folder pairs and sync jobs since the job name is part of the temporary directory name.
So, reevaluating the requirements and the *feedback*, I tend towards implementing 2/II to replace the current 1/I handling of revisions.
- Site Admin
- Posts: 7210
- Joined: 9 Dec 2007
Related problem: how should "Cleanup after x days" handle the job name? It might not exist, or it might change over time when the user renames the config => leave it out of the temp name?
- Posts: 2450
- Joined: 22 Aug 2012
For me the side advantage of 1.
(you know exactly where to find all available versions of a file via a file browser)
is one of the prime reasons to use FFS. I would hate to see that dropped.
If 1. is tied to I and 2. is tied to II, why not give the user the choice (option) of which of the
two combinations to use? It seems to serve the two "camps" of users.
Plerry
- Site Admin
- Posts: 7210
- Joined: 9 Dec 2007
I need to know how big the "camps" are... if one turns out to be small...
- Site Admin
- Posts: 7210
- Joined: 9 Dec 2007
New idea: have FFS by default not apply *any* naming convention at all. If a file is deleted a second time, it will just replace the old file in the "user-defined directory". Additionally, offer a checkbox "keep multiple versions" with a spin control "number of versions".
If one wants to keep an unlimited number of versions in the v5.10 style, he sets the spin control to "0"; a positive finite number imposes a limit.
If he wants to have the old naming convention, he can keep the checkbox unchecked and set the custom directory to "C:\some dir\%timestamp%"
Drawback: It's not possible to delete old versions after x days.
New benefit:
1. It is possible to only keep the last version of all files, directly in the directory specified - without any naming convention.
2. It is possible to limit this behavior to keep only the last version per day:
"C:\some dir\%date%"
or week:
"C:\some dir\%weekday%"
- Site Admin
- Posts: 7210
- Joined: 9 Dec 2007
New idea: have no limit to begin with, but three naming conventions to choose from:
1. Session: the old 5.4 convention, but without the job name: YYYY-MM-DD hhmmss\folder\file.txt
2. Versioning: 5.10: folder\file.txt YYYY-MM-DD hhmmss.txt
3. Overwrite: folder\file.txt
1 offers the semantics of a recycle bin, FIFO, albeit it needs to be enforced manually.
2. software versioning: scenario: many small files, no need to apply any limit, size is not a constraint, requirement to easily find old versions via Explorer
3. kitchen sink: the default semantics is to overwrite the file if it already exists. This may already be useful as is, but can be further refined if the user adds macros like %timestamp% to the specified folder.
I think 1 and 3 are quite fundamental, maybe I'm over-engineering 2 a little due to my personal needs.
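To visualize the three conventions (a hypothetical sketch, not the actual implementation), this is where one revisioned file would end up under each of them:
```python
from datetime import datetime
from pathlib import PureWindowsPath

def revision_path(convention: int, rel_path: str, when: datetime) -> PureWindowsPath:
    """Map a file's relative path to its revision path under conventions 1-3."""
    stamp = when.strftime("%Y-%m-%d %H%M%S")
    p = PureWindowsPath(rel_path)
    if convention == 1:   # Session: YYYY-MM-DD hhmmss\folder\file.txt
        return PureWindowsPath(stamp) / p
    if convention == 2:   # Versioning: folder\file.txt YYYY-MM-DD hhmmss.txt
        return p.with_name(f"{p.name} {stamp}{p.suffix}")
    return p              # Overwrite: folder\file.txt (replaced in place)

# revision_path(2, r"folder\file.txt", datetime(2012, 12, 24, 6, 11, 0))
# -> folder\file.txt 2012-12-24 061100.txt
```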
- Posts: 9
- Joined: 21 May 2004
On 12/24/12 6:11 AM, Zenju wrote:
>
> New idea: have no limit to begin with, but three naming conventions to
> choose from:
>
> 1. Session: the old 5.4 convention, but without the job name:
> YYYY-MM-DD hhmmss\folder\file.txt
> 2. Versioning: 5.10: folder\file.txt YYYY-MM-DD hhmmss.txt
> 3. Overwrite: folder\file.txt
>
> 1 offers the semantics of recycle bin, FIFO, albeit it needs to be
> enforced manually.
> 2. software versioning: scenario: many small files, no need to apply
> any limit, size is not a constraint, requirement to easily find old
> versions via explorer
> 3. kitchen sink: default semantics is to overwrite the file if it already
> exists. This may already be useful as is, but can be further refined if the
> user adds macros like %timestamp% to the specified folder.
>
I'm not familiar with FFS macros, but could each of those choices be
implemented by a macro (maybe through extending the macro "language")?
Another quick thought: could shortcuts/symlinks be useful here?
--
Don Dwiggins
Advanced Publishing Technology
- Site Admin
- Posts: 7210
- Joined: 9 Dec 2007
Option 1 can be implemented in terms of option 3 by using macros. Therefore I'm almost positive I'll drop 1 altogether. 2, on the other hand, is not an extension of 3 with macros.
In general, sure, I could allow the user to specify the full naming convention via "some" macro-like syntax, but the cost would be an inordinate increase in complexity for the user. This is unjustified, since only a minor subset of the huge solution space is actually useful to the user. I'm trying to identify this small subset. The second step is to find a good set of base vectors for this subset. Option 3 is such a base vector. I'm not entirely sure about 2, but combined they span quite some area of this subset.
> could shortcuts/symlinks be useful here?
I don't think so. Any solution needs maximum compatibility; this excludes symlinks. Shortcuts, on the other hand, are a shell-level concept; this is too high a level, we need a solution at the file-system level.
- Site Admin
- Posts: 7210
- Joined: 9 Dec 2007
I've finished the v5.11 revision of the versioning topic:
The new beta version can be found here:
[404, Invalid URL: https://sourceforge.net/p/freefilesync/feature-requests/315/]
- Posts: 2
- Joined: 20 Nov 2012
Thanks for %timestamp%! But...
> I've finished the v5.11 revision of the versioning topic:
> The new beta version can be found here:
> [404, Invalid URL: https://sourceforge.net/p/freefilesync/feature-requests/315/]
For Versioning=Replace, the folder name of the source is lost. See the example in the attachment.
- Attachments
- FileSync_v5.11.zip
- (4.42 KiB) Downloaded 257 times
- Site Admin
- Posts: 7210
- Joined: 9 Dec 2007
> lost the folder name of the source.
This is the usual behavior; files are moved relative to their base synchronization directories.