
Handle node label updates and deletions #18394

Open
vishh opened this issue Dec 8, 2015 · 31 comments
Labels

  • kind/feature: Categorizes issue or PR as related to a new feature.
  • lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
  • needs-triage: Indicates an issue or PR lacks a `triage/foo` label and requires one.
  • priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.
  • sig/node: Categorizes an issue or PR as relevant to SIG Node.

Comments

@vishh (Contributor) commented Dec 8, 2015

As of now, node labels can be updated either directly via the API server or through the kubelet. Within the kubelet, labels can be set directly in code, via command-line flags, or via local files on the host.

Ownership of labels is not explicit. As of now, a label set via the kubelet can be updated directly through the API server; if the kubelet restarts, it will set those labels again.

Kubernetes does not specify any policy around label source precedence.
To begin with, we can require users to avoid label key conflicts themselves.

We need to fix label ownership though.

The kubelet should be aware of the labels it has created, and should not update or delete labels that were not originally created through it. Across restarts, the kubelet needs to keep track of all the labels it has added and delete any labels that have been removed from its configuration.

In v1.1, the kubelet did not surface any labels; hence, forward and backward compatibility should not be an issue.
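The reconciliation described above could be sketched roughly as follows. This is not kubelet code, just a minimal Python illustration of the idea, assuming the kubelet keeps a local checkpoint of the labels it applied on its previous run:

```python
# Illustrative sketch (not actual kubelet code): reconcile kubelet-owned
# labels across restarts using a checkpoint of previously applied labels.

def reconcile_labels(node_labels, configured, checkpoint):
    """Return the updated node label map and the new checkpoint.

    node_labels: labels currently on the Node object (from any source)
    configured:  labels the kubelet is configured to apply
    checkpoint:  labels this kubelet applied on a previous run
    """
    updated = dict(node_labels)
    # Re-apply the configured labels; the kubelet owns these keys.
    for key, value in configured.items():
        updated[key] = value
    # Delete labels the kubelet set previously but no longer configures,
    # without touching keys it never owned.
    for key in checkpoint:
        if key not in configured and key in updated:
            del updated[key]
    return updated, dict(configured)
```

Labels added by other API clients (`team` in a test run, say) are left alone, while a previously kubelet-owned key that disappears from the configuration is removed.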

Related issues & PRs: #17265, #17575, #13524

@vishh vishh added this to the v1.2 milestone Dec 8, 2015
@vishh (Contributor, Author) commented Dec 8, 2015

cc @mml

@mml (Contributor) commented Dec 8, 2015

Modeling kubelet-specified labels via state change is confusing and error prone. Can't the kubelet simply tell the API server "I have these labels", and the API server keeps track of the source of each label? Then any kubelet-sourced label that the kubelet fails to mention in a future API server call is effectively deleted.
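This proposal could be sketched as follows. A hypothetical illustration only (the API server does not work this way today): the server records a source for every label and treats each kubelet report as the complete desired set of kubelet-sourced labels:

```python
# Hypothetical sketch of the proposal above: the API server tracks the
# source of each label, and any kubelet-sourced label missing from the
# kubelet's latest report is deleted.

def apply_kubelet_report(labels, sources, reported, source="kubelet"):
    """labels: current label map; sources: key -> origin of each label;
    reported: the complete set of labels the kubelet now claims."""
    labels = dict(labels)
    sources = dict(sources)
    # Any kubelet-sourced label absent from the report is deleted.
    stale = [k for k, s in sources.items() if s == source and k not in reported]
    for key in stale:
        del labels[key]
        del sources[key]
    # Labels in the report are (re)applied and attributed to the kubelet.
    for key, value in reported.items():
        labels[key] = value
        sources[key] = source
    return labels, sources
```

With this shape there is no per-kubelet state to checkpoint: the report itself is the source of truth, and user-sourced labels are never touched.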

@vishh (Contributor, Author) commented Dec 8, 2015

@mml: AFAIK kubelet is just another client to the API server. Even if we were to have separate fields in the API for kubelet and cluster generated labels, there is no means to prevent the user from updating kubelet owned labels. @mikedanese might be able to provide a bit more context here.

@liggitt (Member) commented Dec 9, 2015

when did the kubelet start owning labels?

@mikedanese mikedanese added the sig/node Categorizes an issue or PR as relevant to SIG Node. label Dec 9, 2015
@vishh (Contributor, Author) commented Dec 9, 2015

@liggitt: #17265 and #16866 have introduced kubelet owned labels.

@dchen1107 (Member) commented

The initial discussion of introducing kubelet owned labels is at: #12090 (comment)

@mqliang (Contributor) commented Dec 25, 2015

@vishh
In #19102 I propose moving the region/zone information to node.spec instead of keeping it as a label. The region/zone information is obtained from the cloud provider, just like ExternalID and ProviderID, and making it a label may lead to the issues you mentioned. After all, if we make region/zone information a node label, why not make ExternalID and ProviderID labels, too?

@dchen1107 dchen1107 modified the milestones: next-candidate, v1.2 Feb 12, 2016
@fejta-bot commented

Issues go stale after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Freeze the issue for 90d with /lifecycle frozen.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 14, 2017
@fejta-bot commented

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 13, 2018
@mikedanese mikedanese removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Feb 8, 2018
@mikedanese mikedanese added the sig/auth Categorizes an issue or PR as relevant to SIG Auth. label Feb 8, 2018
@liggitt (Member) commented Mar 30, 2018

xref kubernetes/community#911 #61659

@liggitt liggitt added the kind/feature Categorizes issue or PR as related to a new feature. label May 3, 2018
@fejta-bot commented

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 1, 2018
@mikedanese mikedanese added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 1, 2018
@markjacksonfishing commented

@liggitt, @mikedanese
Bug Triage team here for the 1.18 release. This is a friendly reminder that code freeze is scheduled for 5 March. Is this issue still intended for milestone 1.18?

@liggitt (Member) commented Feb 12, 2020

/milestone v1.19

https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/0000-20170814-bounding-self-labeling-kubelets.md#implementation-timeline has the last step of label restrictions being implemented in 1.19

@k8s-ci-robot k8s-ci-robot modified the milestones: v1.18, v1.19 Feb 12, 2020
@bai commented May 29, 2020

👋 Hello from Bug Triage team! Wanted to follow up on this issue with a friendly reminder that the code freeze for 1.19 is starting June 25th (about 4 weeks from now).

As this issue is tagged for 1.19, is it still planned for this release?

@liggitt (Member) commented Jun 2, 2020

/remove-sig auth

NodeAuthorization admission protection of node labels is complete in 1.19

@k8s-ci-robot k8s-ci-robot removed the sig/auth Categorizes an issue or PR as relevant to SIG Auth. label Jun 2, 2020
@liggitt liggitt removed this from the v1.19 milestone Jul 8, 2020
@ehashman (Member) commented

Updated KEP link: https://github.com/kubernetes/enhancements/blob/0e4d5df19d396511fe41ed0860b0ab9b96f46a2d/keps/sig-auth/279-limit-node-access/README.md kubernetes/enhancements#279

/sig auth

It appears that this went stable in 1.19 and was marked as implemented in #90307 and kubernetes/enhancements#1737

/close

@k8s-ci-robot k8s-ci-robot added the sig/auth Categorizes an issue or PR as relevant to SIG Auth. label Jun 24, 2021
@k8s-ci-robot (Contributor) commented

@ehashman: Closing this issue.

In response to this:

Updated KEP link: https://github.com/kubernetes/enhancements/blob/0e4d5df19d396511fe41ed0860b0ab9b96f46a2d/keps/sig-auth/279-limit-node-access/README.md kubernetes/enhancements#279

/sig auth

It appears that this went stable in 1.19 and was marked as implemented in #90307 and kubernetes/enhancements#1737

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@liggitt liggitt reopened this Jun 24, 2021
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Jun 24, 2021
@liggitt (Member) commented Jun 24, 2021

Commented in #61659 (comment)

The permissions aspect was a blocker for this and was resolved by https://github.com/kubernetes/enhancements/blob/0e4d5df19d396511fe41ed0860b0ab9b96f46a2d/keps/sig-auth/279-limit-node-access/README.md

This is referring to the second part described in #61659 (comment) (the lifecycle bit)

Lifecycle: the labels are added on initial node registration, but are not reconciled in the following scenarios:

  • if removed/changed by another API client
  • if the kubelet is restarted with a different --node-labels arg

Currently, the kubelet lacks information to know which labels on the Node API object came from its --node-labels arg, so it doesn't have information to reconcile this.
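One conceivable way to close this gap, sketched here purely as an illustration (the annotation name is made up and this is not what the kubelet does), is to persist the set of keys the kubelet applied in an annotation on the Node object, so a restarted kubelet can distinguish its own labels:

```python
# Hypothetical workaround for the reconciliation gap described above:
# record which label keys the kubelet applied in a Node annotation
# (the annotation name below is invented for illustration).
import json

OWNED_KEYS_ANNOTATION = "example.io/kubelet-applied-label-keys"  # hypothetical

def reconcile_with_annotation(node, configured):
    """node: dict with 'labels' and 'annotations' maps;
    configured: labels from the kubelet's current configuration."""
    labels = dict(node.get("labels", {}))
    annotations = dict(node.get("annotations", {}))
    previously_owned = set(json.loads(annotations.get(OWNED_KEYS_ANNOTATION, "[]")))
    # Remove labels this kubelet owned before but no longer configures.
    for key in previously_owned - set(configured):
        labels.pop(key, None)
    # Apply the current configuration and record the owned keys.
    labels.update(configured)
    annotations[OWNED_KEYS_ANNOTATION] = json.dumps(sorted(configured))
    return {"labels": labels, "annotations": annotations}
```

This trades a small amount of state on the Node object for the ability to reconcile after a restart with a different `--node-labels` value, without touching labels set by other clients.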

@enj enj added this to Needs Triage in SIG Auth Old Jul 6, 2021
@enj (Member) commented Oct 4, 2021

/remove-sig auth

Per Jordan, this belongs to sig-node now as sig-auth has completed the work on our end.

@k8s-ci-robot k8s-ci-robot removed the sig/auth Categorizes an issue or PR as relevant to SIG Auth. label Oct 4, 2021
@enj (Member) commented Oct 4, 2021

/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Oct 4, 2021
@enj enj removed this from Needs Triage in SIG Auth Old Oct 4, 2021
@k8s-triage-robot commented

This issue has not been updated in over 1 year, and should be re-triaged.

You can:

  • Confirm that this issue is still relevant with /triage accepted (org members only)
  • Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. and removed triage/accepted Indicates an issue or PR is ready to be actively worked on. labels Feb 8, 2023
@k8s-ci-robot (Contributor) commented

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
