[FEATURE] Snapshot CRD #3144
Hey team! Please add your planning poker estimate with ZenHub @jenting @shuo-wu @PhanLe1010. 20 points seems to be the max for planning poker, but this requires quite a bit of work (Engine, Replica, and Manager modifications, plus refactoring along the way).
Test Plan:
Init
Create
Delete
Others
Pre Ready-For-Testing Checklist
Hi @PhanLe1010, I have a couple of quick questions about the test plan. In step Create -> 2 -> iii, is the snapshot name snap001 a typo in the command? In step Delete -> 3 -> ii, snapshot snap002 is not next to volume-head, because we previously tested recurring jobs with 5 retains, so there should be recurring-job snapshots after snap002. Could you adjust the test steps so they flow more smoothly? Thank you.
Yeah, it is a typo. Fixed it. Thank you.
Done. Thank you.
Validated on master-head 20220519. Following the test steps, the snapshot behavior in the scenarios below all worked as expected:
Create
Delete
Others
Is your feature request related to a problem? Please describe.
Snapshot RecurringJobs set to Retain X number of snapshots do not touch unrelated snapshots, so if one ever changes the name of the RecurringJob, the old snapshots will stick around forever. These then have to be manually deleted in the UI.
Having a CRD for snapshots would greatly simplify this, as one could prune snapshots using kubectl - much like how one can currently manage backups using kubectl due to the existence of the backups.longhorn.io CRD.
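As a sketch of the backup workflow referred to above (backups.longhorn.io is a real Longhorn CRD; the namespace matches a default Longhorn install, and the backup name here is illustrative):

```shell
# List backup custom resources managed by Longhorn.
kubectl -n longhorn-system get backups.longhorn.io

# Delete an unwanted backup object by name (name is illustrative);
# longhorn-manager then removes the corresponding backup data.
kubectl -n longhorn-system delete backups.longhorn.io backup-abc123
```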
Describe the solution you'd like
A CRD for Snapshots. Something like snapshots.longhorn.io, and code in longhorn-manager to ensure that there exists a CRD object for each snapshot for each volume, and that they are in sync.
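A minimal sketch of how such a CRD might be used once it exists. The resource name snapshots.longhorn.io comes from the proposal above; the label selector and object names are assumptions for illustration, not a confirmed schema:

```shell
# Hypothetical: list snapshot objects belonging to one volume
# (the longhornvolume label is an assumed convention).
kubectl -n longhorn-system get snapshots.longhorn.io \
  -l longhornvolume=test-vol

# Hypothetical: prune snapshots left behind by a renamed recurring
# job by deleting their custom resources (names are illustrative).
kubectl -n longhorn-system delete snapshots.longhorn.io \
  old-job-snap-001 old-job-snap-002
```

With longhorn-manager keeping the CR objects and the in-engine snapshots in sync, deleting the CR would be enough to remove the snapshot itself, with no UI interaction needed.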
Describe alternatives you've considered
I suppose a browser automation framework might also work for pruning large numbers of snapshots, but this feels janky as all hell.
Additional context
Screenshot of v1.2.2 UI showing leftover snapshots from a deleted recurringjob, and a single snapshot from a new RecurringJob with retain set to 4:
https://i.imgur.com/SnlP23P.png