This repository has been archived by the owner on Jan 11, 2023. It is now read-only.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contribution. Note that acs-engine is deprecated--see https://github.com/Azure/aks-engine instead.
FEATURE REQUEST
Node Authorization:
The Node authorization admission controller flag is now in master – #1989. However, going by the docs at
https://kubernetes.io/docs/admin/authorization/node/, there are a few other steps required to complete the implementation. At the moment all nodes created by acs-engine use the same shared client cert (https://github.com/Azure/acs-engine/blob/v0.12.0/pkg/acsengine/pki.go#L79-L85) to authenticate against the kubernetes API.
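Going by those docs, the remaining steps would roughly amount to the following API server settings. This is a sketch only: flag names changed across Kubernetes releases (e.g. --admission-control later became --enable-admission-plugins), so the exact spelling depends on the version acs-engine deploys.

```shell
# Sketch of the kube-apiserver settings the Node authorization docs call for.
# "..." stands for whatever other flags the deployment already passes.
kube-apiserver \
  --authorization-mode=Node,RBAC \
  --admission-control=...,NodeRestriction \
  ...
```

The Node authorizer only grants a kubelet access to objects related to its own node, and the NodeRestriction admission plugin stops a kubelet from modifying other nodes' objects.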
The system:masters RBAC group effectively grants cluster-admin-level privileges to all nodes in the cluster. This means that if a worker node is compromised, an attacker can gain access to the entire kubernetes API and, for example, retrieve secrets for pods running on master nodes.

Node authorization reduces the scope of access by making all nodes members of the system:nodes group instead of the system:masters group. Each node then uniquely identifies itself to the kubernetes API with a node name in the format system:node:<nodeName>.

For example, the cert for my agent node k8s-agent-15248279-0 would contain the following credentials:

Common Name: system:node:k8s-agent-15248279-0
Organization: system:nodes

If this node is compromised, it will not be able to retrieve secrets for pods assigned to other nodes. Note that implementing this implies that a 50 node cluster would require 50 node client certs.
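To make the per-node certs concrete, here is a rough sketch of minting one node's client credential with openssl. The node name and file names are illustrative, and the final CA-signing step is shown only as a comment since it assumes access to the cluster CA key.

```shell
# Sketch: generate a unique client cert for one node (illustrative names).
# The subject encodes the identity the Node authorizer expects:
#   CN = system:node:<nodeName>, O = system:nodes
NODE_NAME="k8s-agent-15248279-0"

# Per-node private key
openssl genrsa -out "${NODE_NAME}-key.pem" 2048

# CSR carrying the node identity in its subject
openssl req -new -key "${NODE_NAME}-key.pem" \
  -subj "/O=system:nodes/CN=system:node:${NODE_NAME}" \
  -out "${NODE_NAME}.csr"

# Signing with the cluster CA would then look something like:
# openssl x509 -req -in "${NODE_NAME}.csr" -CA ca.pem -CAkey ca-key.pem \
#   -CAcreateserial -days 365 -out "${NODE_NAME}.pem"
```

Repeated once per node, this is exactly what turns the single shared cert into the 50-certs-for-a-50-node-cluster situation described above.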
A reference implementation is available here: https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/4ca7c4504612d55d9c42c21632ca4f4a0e9b4a52/docs/04-certificate-authority.md#the-kubelet-client-certificates
Perhaps a PR is in the works to complete the implementation, but I thought I should raise/track it here.