I am new to Amazon Glacier and bakthat. Bakthat seems to have a lot of nice features, and I thank you for writing this software. I did one test backup, reviewed many of the options, and have a recommendation for an improvement. When backing up directories it is great that they are tar.gz'ed automatically, yet the contents of the archive are unknown afterwards. It would be useful if the files associated with an archive were stored in the SQLite database. It could be as simple as a JSON blob listing the files (possibly with timestamps, sizes, permissions, etc.) attached to the archive record, or a separate table associated with the archive. Right now this is my only reservation about using Glacier for a bulk backup, because I want to be certain that the versions of the files I want are in the archive I restore. I am a Python developer and am willing to try to implement this feature, but I figured I may have overlooked something obvious, or this improvement may be trivial to add by someone who has more familiarity with the code.
Thanks!
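To illustrate the idea, here is a minimal sketch of what such a manifest could look like. This is not bakthat's actual schema; the `manifests` table, the `build_manifest`/`store_manifest` functions, and the choice of a single JSON blob per archive are all assumptions for illustration only.

```python
import json
import sqlite3
import tarfile


def build_manifest(tar_path):
    """Collect name, size, mtime, and permissions for each member of a tar.gz archive."""
    entries = []
    with tarfile.open(tar_path, "r:gz") as tar:
        for member in tar.getmembers():
            entries.append({
                "name": member.name,
                "size": member.size,
                "mtime": member.mtime,
                "mode": oct(member.mode),
            })
    return entries


def store_manifest(db_path, archive_name, entries):
    """Store the manifest as a JSON blob keyed by archive name (hypothetical table)."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS manifests "
        "(archive_name TEXT PRIMARY KEY, files TEXT)"
    )
    conn.execute(
        "INSERT OR REPLACE INTO manifests VALUES (?, ?)",
        (archive_name, json.dumps(entries)),
    )
    conn.commit()
    conn.close()
```

With something like this in place, one could query the manifest locally to check which version of a file is in which archive before paying for a Glacier retrieval.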
Thanks for your feedback. I think it's a good idea to keep track of the files in each archive, and I'll consider adding this feature. I think it should be optional, but I like the idea.
Currently I'm working on a complete rewrite of Bakthat from scratch. It still needs some work, but I hope to release it soon. I'll keep this issue open and keep you updated.