kubeadm: change SystemPrivilegedGroup in apiserver-kubelet-client.crt #121837
Conversation
/milestone v1.29
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: neolit123 The full list of commands accepted by this bot can be found here. The pull request process is described here
Authentication and authorization are required when ...
right, we need to migrate this to the new kubeadm:cluster-admins group
FYI: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-authn-authz/
The component connection between kube-apiserver and kubelet does not require the "O" field on the Subject to be set to the "system:masters" privileged group. It can be a less privileged group like "kubeadm:cluster-admins". Change the group in the apiserver-kubelet-client certificate specification. This cert is passed to --kubelet-client-certificate.
Force-pushed from c45b355 to 2780060.
yeah, i'm testing with kubeadm:cluster-admins now
/cc @pacoxu
exec works as expected after using "kubeadm:cluster-admins".

$ kubectl exec -it kube-proxy-9htk2 -n kube-system sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
# ls
sh: 1: ls: not found
# echo HELLO
HELLO

PR updated, release note, description as well.
It seems that this change depends on the ...
i can add it as a task in our e2e test. updated the PR for k/website as well to use kubeadm:cluster-admins:
The following case may lead to trouble: regenerate the apiserver-kubelet-client cert for an old v1.28 cluster using the new v1.29 kubeadm:

mv /etc/kubernetes/pki/apiserver-kubelet-client.crt /etc/kubernetes/pki/apiserver-kubelet-client.crt.bak
mv /etc/kubernetes/pki/apiserver-kubelet-client.key /etc/kubernetes/pki/apiserver-kubelet-client.key.bak
kubeadm init phase certs apiserver-kubelet-client

At this point, the O of the certificate has become kubeadm:cluster-admins, but the binding does not exist on the old cluster:

$ kubectl get clusterrolebinding kubeadm:cluster-admins
Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "kubeadm:cluster-admins" not found

Maybe we should keep both ...
i can see it being a problem case, but i wanted to clear the undesired usage of system:masters completely in 1.29. this is also a general problem if they use the 1.29 binary to renew a 1.28 admin.conf. EDIT: BTW, this is covered by our support skew of kubeadm against the cluster, which is N-0 - i.e. only use kubeadm N for operations on a cluster that is version N (unless upgrading).
e2e test update PR: |
/triage accepted
Did we check the cluster version against the kubeadm version before ...
i don't believe we do. i don't think it would help, because kubeadm can deploy an N-1 control plane. if kubeadm supported a kubeadm N-1 skew, it would mean we must store both the old and the new group in the Subject.
Are there other cases that may cause a problem like this (using kubeadm N on an N-1 cluster)? If the skew is N-0, we could add some version checks where a mismatch would trigger a bad result.
one that comes to mind is when the API version in kubeadm-config is changed to a new version, but an old version of kubeadm tries to join; it will not know how to decode the new API version in the ConfigMap.
adding such checks might be a bit tricky, especially for certs like i mentioned above. it seems that users do not often make the mistake of mixing old and new kubeadm, but we cannot exclude the possibility.
/lgtm we may need more review :)
LGTM label has been added. Git tree hash: 2ab198fc4dc50934698a14cdd97a21297dbcefd3
i think this would work fine, we just need to sort out the renewal that you did in your follow-up PR as well. TBH, https://kubernetes.io/docs/reference/access-authn-authz/kubelet-authn-authz/ tells us to sign a cert with a completely custom user/group plus RBAC, but that's yet more RBAC management, and we can just try using the "kubeadm:cluster-admins" group. ca.key is there, but users can decide to use the external CA path and sign custom certs instead of relying on the KCM to sign certs for new kubelets that join the cluster - i.e. not keep ca.key on nodes.
/lgtm |
/hold cancel |
Changelog suggestion:

kubeadm: changed the group that the `apiserver-kubelet-client.crt` X.509 certificate Subject (indirectly) specifies. The new value, `kubeadm:cluster-admins`, is less privileged but still allows the API server to perform all expected actions against nodes.
Aside: readers might wonder why kubeadm doesn't make a dedicated group for the API server to act as when managing nodes. However, making that change would be a separate PR. |
What type of PR is this?
/kind bug
What this PR does / why we need it:
The component connection between kube-apiserver and kubelet does not
require the "O" field on the Subject to be set to the
"system:masters" privileged group. It can be a less
privileged group like "kubeadm:cluster-admins".
Change the group in the apiserver-kubelet-client
certificate specification. This cert is passed to
--kubelet-client-certificate.
Which issue(s) this PR fixes:
NONE
Special notes for your reviewer:
NONE
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: