size placeholders for borg list <repo> #2871
Comments
Until recently, the infrastructure needed for this was missing. I added it (to the master branch) when I implemented the "comment" placeholder. But be aware that computing anything that needs to read the whole archive metadata will be slow, especially if the listing shows many archives and/or the repo is accessed over a slow connection.
Interesting that this is possible, but it is quite slow.
Yeah, that's why I said so.

Can't you just cache this fact?
Deduplicated size has to be calculated and cannot be cached, so this will always be somewhat slow, though #2764 makes it a fair amount faster.
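To see why it can't simply be cached: an archive's deduplicated size is the total size of chunks that no other archive references, so deleting *any* archive can change every other archive's value. A toy illustration (simplified chunk model for this issue, not borg's actual internals):

```python
def deduplicated_size(archive_chunks, all_archives):
    """Size of chunks that no other archive references.

    archive_chunks: {chunk_id: size} for the archive of interest
    all_archives:   one such dict per archive currently in the repo
    (Toy model for illustration -- not borg's actual data structures.)
    """
    refcount = {}
    for chunks in all_archives:
        for cid in chunks:
            refcount[cid] = refcount.get(cid, 0) + 1
    return sum(size for cid, size in archive_chunks.items() if refcount[cid] == 1)

a = {"c1": 100, "c2": 50}   # archive A: chunk c2 is shared with B
b = {"c2": 50, "c3": 200}   # archive B

print(deduplicated_size(a, [a, b]))  # 100 -- only c1 is unique to A
print(deduplicated_size(a, [a]))     # 150 -- once B is deleted, c2 is unique too
```

Note how A's deduplicated size changes from 100 to 150 without A itself being touched, which is why a per-archive cache entry cannot simply live forever.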
Wishing for this feature myself, I slapped this (technically 😉) one-liner together to get an archive list the way I'd like it to look:

It uses `borg info` on each archive. With only 39 archives the speed is OK, but I guess doing it with a lot more archives would be slow.
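The original one-liner did not survive here, but the same idea can be sketched in Python: list the archive names, run `borg info --json` on each, and tabulate the deduplicated sizes. This is an assumption-laden reconstruction (hypothetical repo path, assumes borg ≥ 1.1 for `--json` output and its `archives[0].stats.deduplicated_size` layout), not the commenter's actual snippet:

```python
import json
import subprocess

def dedup_size(info_json):
    """Extract (name, deduplicated_size) from one `borg info --json` output."""
    arch = json.loads(info_json)["archives"][0]
    return arch["name"], arch["stats"]["deduplicated_size"]

def list_archive_sizes(repo):
    """One `borg info` call per archive -- slow for repos with many archives."""
    names = subprocess.check_output(
        ["borg", "list", "--short", repo], text=True
    ).split()
    return [
        dedup_size(subprocess.check_output(
            ["borg", "info", "--json", f"{repo}::{name}"], text=True
        ))
        for name in names
    ]
```

Usage would be something like `list_archive_sizes("/path/to/repo")` (hypothetical path), printing or sorting the resulting `(name, size)` pairs to spot the biggest archives.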
Implemented by #7506 (master / borg2).
`borg list` only supports listing archive, barchive, time and id for an archive. When I see that I need space and want to delete big archives (e.g. because of #2870), I'd like to see an overview of all archives' sizes to find the largest ones and potentially delete them.

I know I could `borg info` each archive, but this takes some time and is inconvenient to do for many archives. Of course, I am only interested in the "deduplicated size" of "this archive".
But if you want, you could also sum up the values and display a line like "size of all archives: XY GB" at the bottom.
Maybe you need to cache this information somehow/somewhere, but this should be possible, as the size of an archive usually does not change, so the cache would never have to expire (unless an archive is deleted).
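The cache suggested above could be sketched like this (hypothetical class and method names, not borg's implementation). The key caveat is the one raised elsewhere in the thread: because a deduplicated size depends on all other archives, a delete has to invalidate the whole cache, not just one entry:

```python
class ArchiveSizeCache:
    """Sketch of the proposed per-archive size cache (hypothetical design)."""

    def __init__(self, compute_size):
        self._compute = compute_size   # expensive: reads archive metadata
        self._cache = {}

    def size(self, archive_id):
        """Return the cached size, computing it on first access."""
        if archive_id not in self._cache:
            self._cache[archive_id] = self._compute(archive_id)
        return self._cache[archive_id]

    def on_archive_deleted(self, archive_id):
        # Deleting any archive can change every other archive's
        # deduplicated size, so the whole cache must be dropped.
        self._cache.clear()

calls = []
cache = ArchiveSizeCache(lambda aid: calls.append(aid) or 1024)
print(cache.size("a1"), cache.size("a1"))  # 1024 1024 -- computed only once
cache.on_archive_deleted("a2")
print(cache.size("a1"))                    # 1024 -- recomputed after invalidation
```

For the plain (non-deduplicated) "this archive" size, which really is immutable, a per-entry cache with no invalidation would suffice.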