Print out the backup size when listing snapshots (enhancement) #693
Thanks for the suggestion. What would you expect the size to be? Since all data is deduplicated, a "size" for a particular snapshot is not that easy to determine. Would that be the size of all data referenced in that snapshot? Or the data that was not yet stored in the repo when the snapshot was taken (new data)?
This is a very good proposal. The number on the right should be the cumulative size of blobs added to the repo. It is the most interesting quantitative parameter of any backup run: how much space did my incremental backup waste tonight? Oops, it's 10x more than last night, I left some junk somewhere (or forgot to add some excludes), I'd better clean it up. ;)
+1 for @zcalusic's suggestion
The problem with the size of "new" blobs (added by that particular snapshot) is that it becomes less meaningful over time, because those blobs will be referenced by later snapshots. In addition, when earlier snapshots are removed, the number of blobs referenced by a particular snapshot will grow. I think it's valuable to print this information right after the backup is complete, and we can also record it in the snapshot data structure in the repo. I've planned to add some kind of 'detail' view for a particular snapshot, and I think it is a good idea to display the number and size of new blobs there, but computing it in the overview (the `snapshots` command) would be too expensive.
I was instantly reminded of the statistics flag of rdiff-backup (see https://www.systutorials.com/docs/linux/man/1-rdiff-backup-statistics/). Sometimes it's nice to see some sort of delta between two snapshots.
Indeed, but that's a different thing: it's computed live and compares two snapshots. We may add something like that, but doing it for the `snapshots` overview would be too expensive.
It could be useful to know the size of the data 'unique' to the snapshot vs. the total size (including dedup'd data) of the snapshot.
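The two notions of size can be made concrete with a small model. This is only an illustrative sketch (made-up blob IDs and sizes, not restic's actual data structures): each snapshot references a set of content-addressed blobs, and blobs are shared between snapshots.

```python
# Hypothetical blob store: blob id -> size in bytes.
BLOB_SIZES = {"a": 100, "b": 200, "c": 300, "d": 50}

# Hypothetical snapshots: name -> set of referenced blob ids.
SNAPSHOTS = {
    "mon": {"a", "b"},
    "tue": {"a", "b", "c"},
    "wed": {"a", "c", "d"},
}

def restore_size(snap: str) -> int:
    """Total size if the snapshot were restored: all referenced blobs."""
    return sum(BLOB_SIZES[b] for b in SNAPSHOTS[snap])

def unique_size(snap: str) -> int:
    """Size of the blobs referenced only by this snapshot -- the space
    freed if it alone were deleted. Note this changes whenever other
    snapshots are added or removed."""
    others = set().union(*(v for k, v in SNAPSHOTS.items() if k != snap))
    return sum(BLOB_SIZES[b] for b in SNAPSHOTS[snap] - others)

print(restore_size("tue"))  # → 600
print(unique_size("tue"))   # → 0: every blob in "tue" also appears elsewhere
print(unique_size("wed"))   # → 50: only blob "d" is unique to "wed"
```

The same model shows why the "new data" number decays in meaning: once "wed" exists, blob "c" (new in "tue") is shared, and deleting "mon" would grow "tue"'s unique share.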
IMO it would be quite useful to have an idea of how much extra space was used for a new snapshot. This could be just the physical storage space computed during backup and stored in the snapshot's metadata. If some snapshot is removed, this metadata should then be invalidated in all future snapshots. I would appreciate such a feature even if nothing else is done in this direction. However, an option for recalculating this "extra size" after some previous backups were removed would also be nice. I think this is what BackupLoupe does for Time Machine on macOS. (The deduplication in Time Machine is very basic, but the problem of defining the "size of a snapshot" is the same.)
The most fundamental thing I'd like to know off the bat is how much disk space the contents of snapshot X would consume on the target disk if I restored it. Preferably I would also be able to get this information for only a subset of the files, e.g. if there were a way to limit the calculation to certain paths.
Thanks @rawtaz for pointing me at this issue. I'm storing backups in metered storage (Backblaze B2). I want to know how much new data I'm creating every time I run a backup. It seems like this ought to be easy to calculate during the backup process; I would be happy if restic would simply log that as part of concluding a backup... but it seems like it might also be useful to store this as an attribute of the snapshot (so it can be queried in the future). I am not really interested in anything that requires extensive re-scanning of the repository, since that will simply incur additional charges.
Any news? |
Hello, I would like to second this suggestion. In addition to 'how big would this snapshot be if I restored it' for any existing snapshot and 'how much did this snapshot add' when a snapshot is created, I have a third suggestion: it would also help to be able to answer the question 'by how much would my repo size shrink if I removed the following snapshot(s)?' This would be useful when deciding which snapshots to forget.
@dimejo It's done -- just waiting for it to be reviewed/merged. :) |
Jumping on a really old issue here, but to me there are two important size fields when thinking of snapshots: the space used by the snapshot alone, and the total size needed to restore it. At least then I could tell how much space a single snapshot is using and how much space I need to perform a restore.
As @fd0 already pointed out, printing the size on every invocation of `restic snapshots` would be too expensive.
Great idea! Is this enhancement in the queue? The total size of the deduplicated data in the repository would also be helpful in such a synopsis.
Any update on this feature? It's very useful to be able to see each snapshot's size and its restore size.
+1
Not at this point. If there are any updates, they'll show up in this issue.
I'd love to see this as well, particularly as a "sanity check" to see if one particular backup perhaps accidentally added some huge files that I don't need backed up (e.g. because I made a mistake in the file exclusion rules), and, if so, to figure out which snapshot that was. Being able to then inspect a snapshot and see which directory exactly is causing the blowup is particularly useful. If you can only compare a snapshot against "all other backups, past and future", then you can at least use it to find large files that change often and thrash the backup. If you can compare it against "only past snapshots", you can easily discover which file exactly caused a particular snapshot to grow so large. For comparison, macOS's Time Machine shows the added size for each backup.

Each such listing takes a few minutes to calculate on an external USB (spinning 3.5") disk; the rule of thumb on my setup is 1 min/GB changed. You can drill down into directories at reasonable speed.

This is not the same as "total file size" (i.e. what the files would occupy after a restore); it is the space the backup added.

Context, for those unfamiliar with macOS's Time Machine: Time Machine uses filenames as keys and does no content inspection at all. Renaming a file leads to an entirely new copy being stored in the backup. One bit changed in a file (and its timestamp updated): same, a full new copy in the backup (the pathological case for Time Machine is a large sqlite3 file with frequent, small changes). It's got some similarities to rsync, if you squint right. On the plus side, the backup target is a regular(ish) directory, so you can open and inspect it with your regular tools. It would be nice if you could use a (hypothetical) restic equivalent to figure out whether restic actually handles that pathological case well. In the case of frequent minor changes to a large sqlite file: is restic actually able to reuse parts of it from previous snapshots? How much? Or can you already answer this question using existing tools?
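On the sqlite question: restic splits files with content-defined chunking, so a small in-place change should only produce a few new chunks while the rest are reused from earlier snapshots. Here is a toy sketch of the mechanism; restic's real chunker uses a Rabin fingerprint over a sliding window, so the rolling-sum hash and all constants below are illustrative only, not restic's actual parameters.

```python
# Toy content-defined chunker: cut wherever the low bits of a rolling
# sum over the last `window` bytes hit a fixed pattern. Because the cut
# decision depends only on nearby content, chunk boundaries resynchronise
# shortly after a localised edit.
import random

def chunks(data: bytes, mask: int = 0x3F, window: int = 16,
           min_len: int = 32) -> list:
    out, start, rolling = [], 0, 0
    for i in range(len(data)):
        rolling += data[i]
        if i >= window:
            rolling -= data[i - window]  # keep sum of the last `window` bytes
        if (rolling & mask) == mask and i - start + 1 >= min_len:
            out.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        out.append(data[start:])
    return out

random.seed(42)
original = bytes(random.randrange(256) for _ in range(8192))
modified = bytearray(original)
modified[4096] ^= 0xFF               # flip a single byte in the middle
modified = bytes(modified)

old, new = chunks(original), chunks(modified)
shared = set(old) & set(new)
print(f"{len(old)} chunks, {len(shared)} reusable after a 1-byte change")
```

Running this shows that only the handful of chunks near the modified offset differ; at blob granularity, restic should exhibit the same reuse for a large sqlite file with small changes.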
I'd really like to emphasize how important this feature is. Regular size checks are part of backup reviews, to ensure the backup doesn't suddenly back up nothing or far too much, which mostly shows up as the backup size going up or down unreasonably. Thank you for the effort!
@EugenMayer FYI, if you happen to be running restic locally, my tool might help.
It's understood that calculating the size of a snapshot is expensive, so adding it to the `snapshots` command by default would make it extremely slow. Still, there may be situations, like the ones described by other people here, where this information is so important that I'd be willing to wait even two hours for a result. So maybe `stats` (or a slightly simpler version of it) could in fact be added as a flag to the `snapshots` command, properly documented as something that should be used only when absolutely necessary.

Having said that, what most people have asked for here is a lot simpler than that. Already today, restic calculates the size of the snapshot after each backup run. Why not simply add this information as a string to the snapshot in the repo? It could then easily be added to the result set of the `snapshots` command, as an additional column called "reported snapshot size". Sure, if you implement this now, it will look a bit ugly since older snapshots won't have this information yet. Personally, I'd be fine with that. And thanks for developing restic, it's a great tool.
Yes, I agree. Any metadata could be added (by the restic developers) to a snapshot during creation, and reading this metadata should not slow anything down. The thing is, it's been 5 years, so I'm not really holding my breath.
Should it be recalculated each time older snapshots are removed?
No. If it's a matter of wording, let's call it "upload size", or anything else. It's just logged information. As with any other log, this information should not change later down the road.
But once the previous snapshot is deleted, this information becomes meaningless. Worse, unless the previous snapshot's hash is recorded along with it, there will be no indication that the recorded information is meaningless.
I don’t see what value there would be in knowing the upload size, in most cases. I can see a few exceptions: I back up a database-driven app that uses block storage, similar to restic itself in that blocks are added but a “purge” is only periodic. It would be useful to spot when a big upload happened, as there is basically no value in deleting intermediary snapshots; but when it does a purge and rebuilds storage blocks, there is suddenly tons of useless data, and spotting an upload-size spike would make it easy to delete everything older and get a decent amount of space back. But this is an edge case at best; in most cases the upload size isn’t actually going to provide useful information.

Because of restic’s nature, the only thing I can see as useful would be a way to propose a deletion and get a value of how much space could be released. I’m unclear whether this can be calculated in a dry run, but there was a post here that seemed to suggest maybe?
I’m also missing this. I would add two columns, though: the “net size”, i.e. the size of the restore area were I to restore the full snapshot, and the amount of data the snapshot added to the repository.
I'm also interested in this metric. It's useful as a sanity check:

* Did this snapshot capture what it was supposed to?
* How is the dataset size trending over time?

IIUC, it's not a cheap computation to perform in the current repository format, but perhaps the number could be saved on a snapshot when it's created.
Alex Morega dixit:

> “net size”, i.e. the size of the restore area, were I to restore the
> full snapshot (modulo filesystem cluster size; I’d be fine with just
> adding the individual files’ sizes, or rounding them up to 512 bytes
> or 1/2/4/8 KiB)

> I'm also interested in this metric. It's useful as a sanity check:
> * Did this snapshot capture what it was supposed to?
> * How is the dataset size trending over time?

Right. It’s also easier to just get it for all backups with one `restic snapshots` command than to dig out the logs of every individual box backing up to that repository to search for the line saying how many MiB were added to the repo by that backup.

Oh, and `restic diff` should perhaps also be able to show sizes, perhaps even (with a flag?) for the individual files shown. And maybe the number of blocks changed?
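The size-aware diff idea can be sketched with a toy model. This is hypothetical (made-up blob IDs and a made-up size table, not how restic's `diff` is implemented, which walks the two snapshots' trees), but it captures what the numbers would mean:

```python
# Illustrative only: snapshots modelled as sets of content-addressed blob
# ids. A size-aware diff reports bytes added, bytes removed, and the
# number of blobs that changed between two snapshots.

BLOB_SIZES = {"a": 100, "b": 200, "c": 300, "d": 400}  # hypothetical sizes

def diff_sizes(old: set, new: set) -> tuple:
    added = sum(BLOB_SIZES[b] for b in new - old)
    removed = sum(BLOB_SIZES[b] for b in old - new)
    changed_blobs = len(new ^ old)  # symmetric difference
    return added, removed, changed_blobs

print(diff_sizes({"a", "b"}, {"a", "c", "d"}))  # → (700, 200, 3)
```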
bye,
//mirabilos
rustic is able to optionally read and display statistical information if it is stored in a snapshot, and it also writes this information into the snapshot's JSON. So, if we decide to let restic save this (or some of this) information, please use the same JSON attributes!
Is there a specific reason for not basing the snapshot statistics on the JSON summaryOutput of the backup command (restic/internal/ui/backup/json.go, line 228 in fb5b937), minus `message_type` and `snapshot_id`? That would provide the benefit of having the same format everywhere. The executed command would still have to be stored separately.
The reason is, I never used that and wasn't aware of it.
No, in fact we have three times here: the time the command was started, the time the actual backup started, and the time it finished.
Thanks for the hint about the existing JSON structure, I'll change this in rustic! Additionally, I really think that this comparatively small change would add a huge benefit to restic too!
Counting to three is hard ^^ .
My suggestion would be to keep the statistics information in a (sub-)object within the snapshot, if that isn't already the plan.
What's the use case for separately reporting the `backup_start` time?
MichaelEischer dixit:

> Is there a specific reason for not basing the snapshot statistics on
> the JSON summaryOutput of the backup command?

The backup command runs on a different box, so you’d have to collect its output, copy it to another box somehow, store/archive it somehow, make it possible to relate that info to that snapshot somehow, and forget/prune won’t get rid of it either.

(Plus the info changes when previous snapshots are forgotten, so it needs to be partially recalculated then.)
I now save the stats in rustic in a sub-object within the snapshot. The fields added in comparison to the summaryOutput struct include `backup_start`.
IMO those are all statistically relevant data points worth keeping in a snapshot. The use case for having both "backup_start" and "time" is that you can determine the "warm-up time" (i.e. index reading, finding the parent snapshot, etc.). Moreover, you know that data modified before "backup_start" is always contained in the backup. From a backup point of view, "backup_start" is more interesting than the time the command was started, but as the latter is already saved in "time", I decided to keep both as additional statistical information.
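For illustration, a snapshot carrying such a summary sub-object might look roughly like this. This is only a sketch, not rustic's or restic's actual on-disk format: the statistics field names are borrowed from restic's summaryOutput, `backup_start` is the field discussed above, and all values are made up.

```json
{
  "time": "2023-05-01T22:00:01Z",
  "tree": "…",
  "paths": ["/home"],
  "hostname": "box1",
  "summary": {
    "backup_start": "2023-05-01T22:00:05Z",
    "files_new": 12,
    "files_changed": 3,
    "files_unmodified": 10480,
    "data_added": 1048576,
    "total_files_processed": 10495,
    "total_bytes_processed": 53687091200
  }
}
```

Keeping the statistics in a nested object (rather than top-level snapshot fields) means readers that don't know about them can ignore the whole sub-object.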
Actually, I think I forgot a quite important field: the version of the program called to do the backup. This allows you, e.g., to identify which snapshots may be affected by an a-posteriori discovered bug, or to identify snapshots (and tree metadata) which need migration if changes are made within these structs.
Output of `restic version`

Any.

Expected behavior

Adding an extra column listing the size of the backup (in bytes) would be very useful. It would help distinguish between different backups just by checking their size.