Snapshot retention policy (instances and storage volumes) #51
Hi! I'm a UT student working on a group project and would love to take on this issue if it's still available.
Hey @benginow, This issue is currently marked as a maybe as it's not clear whether what's requested here can actually be made to fit Incus' design. So I wouldn't recommend starting with this particular issue :)
I'd be happy to discuss this in more detail if there's anything about this request that could/should be tweaked to make it align better with Incus' design.
So in general, I'm definitely not in favor of adding support for creating metadata-only snapshots: without something immediately filling them, it effectively leads to a broken database state, which may prevent Incus from performing needed data transitions on update, or even routine background tasks that span all instances and their snapshots. If you absolutely want to go down that path, your best bet is basically to do:
That should result in the same thing but without us having to actually support it ;) Snapshot retention is also a bit tricky because our fundamental design is based around each snapshot having an expiry date. That's a bit different from backup systems, where that's often not true and which instead rely on an overall snapshot retention policy to consider the current snapshot set and trim it based on the policy. It'd be interesting to figure out if, given say:
it would be possible to already assign an expiry date to the snapshot at the time it's created, by looking at the snapshots available at that point to determine how long the new one should be kept overall. If that's possible to do and leads to a useful result, then I think we could implement that. If not, then the best option would be for 3rd-party backup tools that want to manage snapshots and backups on Incus to trigger expiry-less snapshots themselves and then delete them based on their own policy.
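A minimal sketch of what that expiry-at-creation idea could look like. The policy numbers and the promotion rule here are illustrative assumptions, not existing Incus behavior, and `expiry_for` is a hypothetical helper:

```shell
#!/bin/sh
# Illustrative only: pick an expiry for a new snapshot at creation time,
# based on how long ago the newest "monthly" and "weekly" snapshots were
# taken. All times are epoch seconds; the retention windows are made-up.
expiry_for() {
    now=$1            # creation time of the new snapshot
    last_monthly=$2   # creation time of newest month-boundary snapshot (0 if none)
    last_weekly=$3    # creation time of newest week-boundary snapshot (0 if none)
    week=$((7 * 24 * 3600))
    month=$((30 * 24 * 3600))
    if [ $((now - last_monthly)) -ge "$month" ]; then
        echo $((now + 365 * 24 * 3600))   # becomes the monthly: keep ~1 year
    elif [ $((now - last_weekly)) -ge "$week" ]; then
        echo $((now + month))             # becomes the weekly: keep ~1 month
    else
        echo $((now + 10 * 3600))         # ordinary hourly: keep 10 hours
    fi
}
```

Because the decision only looks at snapshots that already exist when the new one is created, it would fit the per-snapshot expiry model without needing a separate pruning pass.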
I think this would work. The only case this doesn't cover is differing retention policies across servers. For example, if you host a container with a busy database, you might not want to keep any snapshots from more than a couple of days ago on the running Incus server itself, due to the size of the snapshots, but the backup server (which may have more space) could keep additional snapshots. Would these
I think the problem with this route, at least historically, is if you delete the ZFS snapshot out from under Incus (e.g. sanoid cleans up a snapshot and then tries to run
Those proposed config keys would be instance settings; you could make them apply to multiple instances through profiles. For the One thing that I think we could do is I'd consider
This would be great if possible (and useful even irrespective of this feature request). The proposed instance config keys along with an `inc snapshot create --metadata-only` flag for fine-grained snapshot management
Updated the issue to focus on those proposed additional config keys for both instances and storage volumes:
As mentioned above, those will effectively have to conflict with
I've filed #745 for the other part of this issue. The last thing mentioned here, the issue with deleting snapshots from Incus when the underlying storage-level snapshot is already gone, should be filed separately if it still occurs today.
Thank you!
I just tested and am unable to reproduce this issue on Incus 6.0 LTS, so no further action is needed for this.
Nice feature! I'm doing this via a cron script every day. I have a nice-to-have request for this retention policy: after deleting an account, a customer has a legal window to ask for their data back, so we must keep that data for several months. In Ceph, deleting a volume on a mirrored system also deletes it on the destination mirror (and stopping mirroring deletes the image as well). Incus deletes the image by default. It might be nice to have a switch to a trash policy on delete: Ceph's trash has a policy to remove entries after XXX days (so it's perfect!). The difference is that `rbd image rm ...` becomes `rbd trash mv ...`.
Yeah, we may be able to have the Ceph driver do that; we'd just need to validate Ceph's behavior when a name gets reused, basically how to name the trash entries so conflicts can be avoided.
With #9 making significant changes to the structure of the `inc snapshot` command (and there being more of a possibility of breaking backwards compatibility), I'd like to suggest a feature to increase snapshot retention granularity. The following use cases warrant maintaining a history of snapshots, but not at the same duration or granularity as the server where the incus container/VM is actually running:

- **Backup Server**: you may want to keep a variety of snapshots going back several months (or even years), but you need a variable level of granularity as you go farther back in time (or storage space and the sheer number of snapshots will become unmanageable). For example, let's say you take snapshots every hour. You may want a lot of granularity in recent snapshots, say the last 10 hourly snapshots, but then you only need 2 snapshots from last week, 1 snapshot from last month, and 1 from last year. Sanoid lets you configure this type of variable granularity based on timescale, and it would be great if we could do this with incus too.
- **Warm Spare**: it's desirable to maintain a warm spare server with a copy of the incus container/VM data: in case the active server goes down, the guest can quickly be brought back online on the warm spare. This is discussed briefly here, but utilizing syncoid and ZFS snapshots to only send the updated delta means that keeping the warm spare up-to-date is easy and doesn't require re-syncing the whole dataset. I think incus supports copying the guest with the `--refresh` flag, but again you have the problem of not being able to maintain a separate snapshot retention policy on the warm spare.

Sanoid now supports pre-snapshot, post-snapshot, and post-prune hooks, so if incus had a command like `incus snapshot {create,delete} --metadata-only --snapshot-name="xxx"` to only create/delete the database entry but not actually try to `zfs snapshot`/`zfs destroy` the snapshot, this would work (you could let sanoid be in charge of creating and deleting the actual ZFS snapshots).

Is it possible to add the above type of enhanced functionality for snapshots to incus, or provide some type of integration with 3rd-party tools like sanoid/syncoid?
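The variable-granularity retention described in the backup-server example can be sketched as a small keep/prune predicate. The bucket widths and quotas follow the example numbers above; `keep_snapshot` is a hypothetical helper, not part of incus or sanoid:

```shell
#!/bin/sh
# Illustrative sanoid-style retention check. A snapshot is kept if its rank
# (1 = newest) within its age class is within that class's quota:
# last 10 hourlies, 2 from the past week, 1 from the past month, 1 from the past year.
keep_snapshot() {
    age_hours=$1   # snapshot age in hours
    rank=$2        # 1-based rank among snapshots of the same age class, newest first
    if [ "$age_hours" -le 10 ]; then
        [ "$rank" -le 10 ]               # recent hourlies
    elif [ "$age_hours" -le 168 ]; then
        [ "$rank" -le 2 ]                # past week
    elif [ "$age_hours" -le 720 ]; then
        [ "$rank" -le 1 ]                # past month
    elif [ "$age_hours" -le 8760 ]; then
        [ "$rank" -le 1 ]                # past year
    else
        false                            # older than a year: prune
    fi
}
```

A pruning pass would group the existing snapshots into these age classes, rank them newest-first, and delete anything for which the predicate fails; this is the "trim the current snapshot set against an overall policy" model that backup tools like sanoid use.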