[Suggestion] Loose file archive default naming from contents #237
Wanted to toss out an idea from a suggestion I made in another thread; it might have merit as its own proposal for a new default naming of loose file archives, instead of the current generic placeholder:
Create names simply from the leading file or folder (item) plus the quantity of additional top-level items.
This would allow archiving queued batches without losing context of what got batched (especially when using 'Delete file(s) after compression'). Thoughts?
I think it's overcomplicating things to treat this as picking a prominent final file name. Forget about prominence; that's too complicated and of course subjective. Instead of the final end result, think of this simply as a replacement for the temporary name "compressed file #.7z", still meant for renaming as it is now -- just a replacement that helps you remember what you archived, especially in batches. Let me explain:
I think a simple contextual base name is better than an arbitrary generic one, especially because the archives can then be renamed without guessing after a large batch run. And there are plenty of batch-archiving cases where you do not want to gather the files into folders ahead of time.
The first item alphabetically (plus a top-level item count, to avoid confusion) gives enough context that you can rename the archive without unarchiving it, and without having to remember the order of items before archiving just to match the "compressed file" numbers. Imagine a folder full of items you intend to batch in ordered segments, top to bottom, without pre-grouping them into folders, and consider what that queue produces.
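To make the proposal concrete, here is a minimal sketch of the naming rule in Python. The "and N more" wording and the fallback to the current generic name are my own illustration, not anything Keka actually does:

```python
import os

def default_archive_name(items, ext=".7z"):
    """Name an archive after its leading (alphabetically first) top-level
    item, plus a count of the remaining items -- a sketch of the proposal."""
    if not items:
        # Fall back to the current generic placeholder name.
        return "compressed file" + ext
    # Alphabetically first item stands in for the "leading" item.
    leading = sorted(items)[0]
    base, _ = os.path.splitext(os.path.basename(leading))
    extra = len(items) - 1
    if extra == 0:
        return base + ext
    return f"{base} and {extra} more{ext}"

# e.g. default_archive_name(["Report.pdf", "Notes.txt", "Data.csv"])
# -> "Data and 2 more.7z"
```

If Keka instead keeps the order of the dragged list (see below on what "leading" should mean), the `sorted(items)[0]` line would simply become `items[0]`.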
Currently I have to turn off 'Delete file(s) after compression' or lose context completely when I return after the batch is complete. So I keep the original order of the files, go through them in order, queue up groups of loose files for batching, and after compression is complete I methodically name the archives and delete the original sets, one group at a time, in the exact order of compressed file 1, 2, 3, etc. This workaround is currently the closest thing to optimized batch archiving of files in groups without folders; doing each group individually would take an absurd amount of time.
Just using the alphabetically first item plus a count would at least give each archive some connection to what was archived, so a user can rename accordingly without unarchiving -- or without refraining from large batching operations altogether. Running .7z compression on the slowest setting over a bunch of large files takes quite a while even on a decently powerful machine when threading up a batch; it's something I'd rather load up, walk away from, and return to in an hour when they are all done, then go through and name them accordingly.
This use case is why I requested shared-filename inference, but I'm sure there are cases where the files to archive don't share a common name, yet users would still want to archive them in batches.
I guess I should be referring to the leading file not as the alphabetically first item but simply as the first item processed by Keka -- though that depends on how Keka processes a given list of items dropped on it from the Finder or elsewhere, and whether it respects the order of the dragged list under various sort modes. So "leading" means the first item of a given list.