Handle node label updates and deletions #18394
Comments
cc @mml |
Modeling kubelet-specified labels via state change is confusing and error prone. Can't the kubelet simply tell the API server "I have these labels", and the API server keeps track of the source of each label? Then any kubelet-sourced label that the kubelet fails to mention in a future API server call is effectively deleted. |
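The declarative model proposed here could be sketched as follows. This is an illustrative sketch only, not actual API server code; the function and type names are invented for the example:

```go
package main

import "fmt"

// reconcileKubeletLabels sketches the proposal above: the API server keeps
// track of which label keys were sourced from the kubelet, and when the
// kubelet declares its current label set, any previously kubelet-sourced
// key that is no longer declared is effectively deleted. Labels sourced
// elsewhere (e.g. set by users) are never touched.
func reconcileKubeletLabels(nodeLabels map[string]string, kubeletOwned map[string]bool, declared map[string]string) map[string]bool {
	// Apply every label the kubelet declares and mark it kubelet-owned.
	newOwned := make(map[string]bool, len(declared))
	for k, v := range declared {
		nodeLabels[k] = v
		newOwned[k] = true
	}
	// Any key previously owned by the kubelet but absent from the new
	// declaration is deleted.
	for k := range kubeletOwned {
		if _, still := declared[k]; !still {
			delete(nodeLabels, k)
		}
	}
	return newOwned
}

func main() {
	labels := map[string]string{"user-label": "x", "zone": "us-east-1a"}
	owned := map[string]bool{"zone": true} // "zone" was kubelet-set last time
	// Kubelet restarts and now declares only "instance-type": "zone" is
	// dropped, "user-label" is left alone.
	owned = reconcileKubeletLabels(labels, owned, map[string]string{"instance-type": "m4.large"})
	fmt.Println(labels, owned)
}
```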
@mml: AFAIK kubelet is just another client to the API server. Even if we were to have separate fields in the API for kubelet and cluster generated labels, there is no means to prevent the user from updating kubelet owned labels. @mikedanese might be able to provide a bit more context here. |
when did the kubelet start owning labels? |
The initial discussion of introducing kubelet owned labels is at: #12090 (comment) |
@vishh |
Issues go stale after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
@liggitt, @mikedanese |
/milestone v1.19 https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/0000-20170814-bounding-self-labeling-kubelets.md#implementation-timeline has the last step of label restrictions being implemented in 1.19 |
👋 Hello from Bug Triage team! Wanted to follow up on this issue with a friendly reminder that the code freeze for 1.19 is starting June 25th (about 4 weeks from now). As this issue is tagged for 1.19, is it still planned for this release? |
/remove-sig auth NodeAuthorization admission protection of node labels is complete in 1.19 |
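The admission protection mentioned here works by restricting which label keys a kubelet may set on its own Node object. A simplified sketch of that kind of prefix check follows; the allowlist is deliberately incomplete and the function name is hypothetical — see the NodeRestriction admission plugin and the linked KEP for the actual rules:

```go
package main

import (
	"fmt"
	"strings"
)

// isAllowedKubeletLabel is an illustrative sketch of the kind of check the
// NodeRestriction admission plugin applies to kubelet self-set node labels.
// The allowlist below is incomplete; it is not the plugin's real rule set.
func isAllowedKubeletLabel(key string) bool {
	// node-restriction.kubernetes.io/* is reserved for cluster admins and
	// may never be self-set by a kubelet.
	if strings.HasPrefix(key, "node-restriction.kubernetes.io/") {
		return false
	}
	slash := strings.Index(key, "/")
	if slash < 0 {
		return true // unprefixed labels are outside the restricted namespaces
	}
	prefix := key[:slash]
	restricted := prefix == "kubernetes.io" || prefix == "k8s.io" ||
		strings.HasSuffix(prefix, ".kubernetes.io") || strings.HasSuffix(prefix, ".k8s.io")
	if !restricted {
		return true // e.g. example.com/gpu is fine
	}
	// Within the kubernetes.io/k8s.io namespaces, only allowlisted keys
	// and prefixes pass (illustrative subset).
	switch key {
	case "kubernetes.io/hostname", "kubernetes.io/arch", "kubernetes.io/os":
		return true
	}
	return strings.HasPrefix(key, "node.kubernetes.io/") ||
		strings.HasPrefix(key, "kubelet.kubernetes.io/")
}

func main() {
	fmt.Println(isAllowedKubeletLabel("kubernetes.io/hostname"))              // true
	fmt.Println(isAllowedKubeletLabel("node-restriction.kubernetes.io/team")) // false
	fmt.Println(isAllowedKubeletLabel("example.com/gpu"))                     // true
}
```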
Updated KEP link: https://github.com/kubernetes/enhancements/blob/0e4d5df19d396511fe41ed0860b0ab9b96f46a2d/keps/sig-auth/279-limit-node-access/README.md kubernetes/enhancements#279 /sig auth It appears that this went stable in 1.19 and was marked as implemented in #90307 and kubernetes/enhancements#1737 /close |
@ehashman: Closing this issue. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
Commented in #61659 (comment) The permissions aspect was a blocker for this and was resolved by https://github.com/kubernetes/enhancements/blob/0e4d5df19d396511fe41ed0860b0ab9b96f46a2d/keps/sig-auth/279-limit-node-access/README.md This is referring to the second part described in #61659 (comment) (the lifecycle bit) |
/remove-sig auth Per Jordan, this belongs to sig-node now as sig-auth has completed the work on our end. |
/triage accepted |
This issue has not been updated in over 1 year, and should be re-triaged. For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/ /remove-triage accepted |
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
As of now, node labels can be updated either directly through the API server or via the kubelet.
Within the kubelet, labels can be set in code, via command-line flags, or from files on the local host.
Ownership of labels is not explicit: a label set via the kubelet can currently be overwritten directly through the API server, and when the kubelet restarts it will re-apply those labels.
Kubernetes does not specify any policy around label source precedence.
To begin with, we can require users to avoid label key conflicts themselves.
We do, however, need to fix label ownership.
The kubelet should be aware of the labels it has created: it should not update or delete labels that were not originally set through it, and it should track every label it has added so that it can delete labels that were removed across restarts.
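The bookkeeping described above might look like the following sketch. Persistence of the owned-key list across restarts (e.g. in a local checkpoint file) is assumed and not shown, and all names are illustrative rather than actual kubelet code:

```go
package main

import "fmt"

// syncOwnedLabels applies the kubelet's current label set without
// clobbering keys owned by others, and deletes only keys the kubelet
// itself created on a previous run (prevOwned) that have since been
// removed from its configuration.
func syncOwnedLabels(node map[string]string, prevOwned []string, current map[string]string) []string {
	prevSet := make(map[string]bool, len(prevOwned))
	for _, k := range prevOwned {
		prevSet[k] = true
	}
	var owned []string
	for k, v := range current {
		if _, exists := node[k]; exists && !prevSet[k] {
			continue // set by someone else: do not update labels we don't own
		}
		node[k] = v
		owned = append(owned, k)
	}
	for _, k := range prevOwned {
		if _, still := current[k]; !still {
			delete(node, k) // we created it; it left our config, so delete it
		}
	}
	return owned // persist this list for the next restart
}

func main() {
	node := map[string]string{"team": "infra", "zone": "a"} // "zone" was kubelet-set
	// After a restart the kubelet's config now specifies only "disk":
	// "zone" is deleted, "team" is untouched.
	owned := syncOwnedLabels(node, []string{"zone"}, map[string]string{"disk": "ssd"})
	fmt.Println(node, owned)
}
```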
In v1.1 the kubelet did not surface any labels, so forward and backward compatibility should not be an issue.
Related issues & PRs: #17265, #17575, #13524