Limit kubelets from updating their own labels when NodeRestriction is enabled #68267
Conversation
cc @vikaschoudhary16 @derekwaynecarr since the plugin watcher looks to be the most immediate source of expanding kubelet self-labeling.
edit: talked with @verult; #67684 is the PR that adds self-labeling to the kubelet for CSI drivers. It is currently behind the PluginWatcher gate, but if that goes to beta, we risk shipping 1.12 with uncontrolled self-updates by kubelets. Can we fence off that capability until we can coordinate label controls for kubelets?
/assign @yujuhong
Can you explain this more? I want to make sure usability isn't heavily compromised. Poking a hole in RBAC isn't safe today for node-level daemons.
For things the kubelet is expected to self-report topology for (device drivers and CSI come to mind), one possibility would be to have a registration object for the device class or CSI driver that includes the list of label keys that will be used for topology. For example, CSI has the CSIDriver object, which would be a natural place to put that info (and has benefits beyond authorization... today the CSI provisioner has to pick a node at random to determine what the topology keys are). That registration object would not be kubelet-writeable. The node admission plugin could then allow kubelets to self-set labels that were designated as topology labels by one of those registration objects.
How is auth managed for the plugins that write this secondary object, which will then get copied over to the Node object?
Either the cluster admin or something at the control-plane level would be expected to set those objects up (it could be part of the manifest or add-on that set up the device driver or CSI plugin).
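The registration-object idea above can be sketched in Go. Everything here is hypothetical: `driverRegistration` stands in for a non-kubelet-writable object like CSIDriver, and `kubeletMaySetLabel` models only the admission decision, not the actual NodeRestriction plugin.

```go
package main

import "fmt"

// driverRegistration is a stand-in for a registration object such as
// CSIDriver. The type and its fields are illustrative, not real API.
type driverRegistration struct {
	Name         string
	TopologyKeys []string
}

// allowedTopologyKeys collects every label key that some registration
// object has declared as a topology key.
func allowedTopologyKeys(regs []driverRegistration) map[string]bool {
	allowed := map[string]bool{}
	for _, r := range regs {
		for _, k := range r.TopologyKeys {
			allowed[k] = true
		}
	}
	return allowed
}

// kubeletMaySetLabel models the admission decision: a kubelet may
// self-set a label only if a registration object designated it as a
// topology key.
func kubeletMaySetLabel(key string, regs []driverRegistration) bool {
	return allowedTopologyKeys(regs)[key]
}

func main() {
	regs := []driverRegistration{{
		Name:         "ebs.csi.example.com",
		TopologyKeys: []string{"topology.ebs.csi.example.com/zone"},
	}}
	fmt.Println(kubeletMaySetLabel("topology.ebs.csi.example.com/zone", regs)) // true
	fmt.Println(kubeletMaySetLabel("node-role.kubernetes.io/master", regs))   // false
}
```

Because the registration objects are written by the admin or control plane rather than the kubelet, the allowed set cannot be expanded by a compromised node.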
cc @kubernetes/sig-auth-pr-reviews @kubernetes/sig-node-pr-reviews @kubernetes/sig-storage-pr-reviews
Looks reasonable. Who needs to sign off on this?
{
	name:       "allow create of my node with labels",
	podsGetter: noExistingPods,
	attributes: admission.NewAttributesRecord(mynodeObjLabelA, nil, nodeKind, mynodeObj.Namespace, "", nodeResource, "", admission.Create, false, mynode),
s/mynodeObj.Namespace/mynodeObjLabelA.Namespace/ here and below? Or just omit it. We are already omitting the name and nodes aren't namespaced. It looks like a number of other test cases are sloppy as well.
It doesn't actually bother me to pass the empty object namespace in, and the name of all the mynode* test fixtures is identical; the point of the different ones is to change other fields in the object. Will address separately if desired; trying to keep the test diff readable.
/lgtm
/assign derekwaynecarr tallclair
kubelet changes look good to me. /lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: derekwaynecarr, liggitt, mikedanese. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/retest
Saw this in the release notes. This is very good to see. One thing I don't understand, though: to really solve the issue, wouldn't all usage of labels that could ever have security ramifications now be required to be prefixed with kubernetes.io/ or k8s.io/? Thanks,
See https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#noderestriction for a label prefix specifically reserved for this use.
Ah, ok. So the recommendation in general is "always prefix your node labels with node-restriction.kubernetes.io/"?
For the labels you don't want kubelets setting, yes, if you want to use the built-in protection. You can also add your own admission checks to protect non-kubernetes labels.
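A custom admission check along those lines can be sketched in Go. This is a minimal illustration, not the actual NodeRestriction code: `protectedPrefixes` and the function names are made up, and a real webhook would operate on Node objects rather than bare label maps.

```go
package main

import (
	"fmt"
	"strings"
)

// protectedPrefixes is a hypothetical list of label prefixes that a
// kubelet must not add, change, or remove on its own Node object.
var protectedPrefixes = []string{
	"node-restriction.kubernetes.io/",
	"example.com/secure-",
}

func isProtected(key string) bool {
	for _, p := range protectedPrefixes {
		if strings.HasPrefix(key, p) {
			return true
		}
	}
	return false
}

// forbiddenLabelChanges returns the protected label keys that differ
// between the old and new label maps (added, removed, or modified).
// An admission check would reject the update if this is non-empty.
func forbiddenLabelChanges(oldLabels, newLabels map[string]string) []string {
	var bad []string
	seen := map[string]bool{}
	for k, v := range newLabels {
		seen[k] = true
		if old, ok := oldLabels[k]; isProtected(k) && (!ok || old != v) {
			bad = append(bad, k)
		}
	}
	for k := range oldLabels {
		if !seen[k] && isProtected(k) {
			bad = append(bad, k)
		}
	}
	return bad
}

func main() {
	// The kubelet tries to drop a protected label while setting its own.
	oldLabels := map[string]string{"node-restriction.kubernetes.io/team": "db"}
	newLabels := map[string]string{"kubernetes.io/hostname": "worker-1"}
	fmt.Println(forbiddenLabelChanges(oldLabels, newLabels))
}
```

Comparing both directions (old vs. new and new vs. old) matters: removing a protected label is just as security-relevant as adding one.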
Ok. Yeah, I was just looking for general advice to give to new users to avoid issues. Since it's always safe and doesn't have an extra cost, it's "always prefix your node labels with node-restriction.kubernetes.io/" (unless you know what you are doing). Thanks. :)
@liggitt 'node-role.kubernetes.io/master' and 'node-role.kubernetes.io/node' seem to be used across projects, including kubeadm and kubespray. Can you please give some direction on whitelisting these as an allowed set of labels?
Conformance tests will also be broken if this label is not used for control-plane nodes and they're unschedulable, e.g. with taints. See https://github.com/kubernetes/kubernetes/blob/v1.14.1/test/e2e/framework/util.go#L2755-L2760
That label is not appropriate for use in a conformance test, as it is not official API and is not maintained by kubernetes components.
Opened #76654 to let the e2e/conformance invoker select which nodes they think should be schedulable, rather than hard-coding handling of a label.
Letting arbitrary nodes self-label as a master node is not reasonable. That label is likely to be used in nodeSelectors of daemonsets running highly privileged components.
kubeadm does not set that label using self-labeling, but applies it using the superuser client at cluster startup.
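To make the distinction concrete, here is a minimal sketch of how an admin-credentialed client (not the kubelet) could construct a merge patch to label a node. `buildLabelPatch` is illustrative and not kubeadm's actual code; only the patch body is built here, with no cluster calls.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildLabelPatch constructs a merge-patch body that an admin client
// could send to a Node object to apply a label. The kubelet's own
// credentials would be blocked from making this change by the
// NodeRestriction admission plugin.
func buildLabelPatch(key, value string) ([]byte, error) {
	patch := map[string]interface{}{
		"metadata": map[string]interface{}{
			"labels": map[string]string{key: value},
		},
	}
	return json.Marshal(patch)
}

func main() {
	b, _ := buildLabelPatch("node-role.kubernetes.io/master", "")
	fmt.Println(string(b)) // {"metadata":{"labels":{"node-role.kubernetes.io/master":""}}}
}
```

The security property comes from who sends the patch: a superuser client at cluster startup, rather than the node's own credentials.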
Implements phase 1 of https://github.com/kubernetes/community/blob/master/keps/sig-auth/0000-20170814-bounding-self-labeling-kubelets.md#implementation-timeline
Docs PR in kubernetes/website#10944
This PR:
- Prevents kubelets from setting or modifying labels with a node-restriction.kubernetes.io prefix on their Node objects
- Limits the kubernetes.io labels kubelets can set when updating their Node objects to an allowed set of labels and label prefixes
- Warns if kubernetes.io labels outside that set are passed via --node-labels (will escalate to an error in v1.15)
- Warns if kubernetes.io labels outside that set are passed on Node creation (will escalate to a forbidden error in v1.17)

/sig auth
/sig node
/sig storage
/cc @mikedanese @verult @saad-ali @vishh