apiextensions apiserver: update storage version for custom resources #96403
Does this case mean that another CRD state is already queued and will be used to update the storage version, while we still claim that this one got processed? How do we guarantee consistency? I would rather expect just one update per UID in the queue, replacing an unprocessed update with a more up-to-date one, or even merging them.
In theory it's possible that a CRD's storage version changes from v1 to v2 and then back to v1, while serving CR write requests at each stage. We cannot simply update the storage version to v1 and unblock all the write requests, because the storage migrator would not be able to tell that v2 data exists in etcd. Therefore, there can be more than one update for a UID in the queue at some point.
A possible optimization is to merge consecutive updates triggered by the watch event handlers. I added a TODO.
don't we need any locking?
The caller (the crd handler) held a lock. I added a separate lock to the manager for better isolation.
This needs panic handling. Otherwise, updates for a UID can be blocked forever (until the process restarts).
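The concern is that a panic while processing one update would stall that UID's queue permanently. A minimal sketch of recovering from such a panic (Kubernetes itself typically uses `k8s.io/apimachinery/pkg/util/runtime.HandleCrash` for this; `processUpdate` below is a hypothetical stand-in):

```go
package main

import "fmt"

// processUpdate runs the per-UID update work and converts a panic into
// an error, so the worker loop can requeue or log instead of dying and
// leaving the UID's pending updates blocked forever.
func processUpdate(uid string, work func()) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("update for UID %s panicked: %v", uid, r)
		}
	}()
	work()
	return nil
}

func main() {
	// A panicking update is reported as an error...
	fmt.Println(processUpdate("abc", func() { panic("boom") }))
	// ...while a normal update returns nil.
	fmt.Println(processUpdate("abc", func() {}))
}
```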
no locking?