Insufficient permissions for RBAC role system:kube-controller-manager #48208

Closed
antoineco opened this Issue Jun 28, 2017 · 6 comments

@antoineco
Contributor

antoineco commented Jun 28, 2017

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

The kube-controller-manager component runs in a degraded mode inside a cluster whose authorization mode is RBAC.

  • X.509 authentication is used within that cluster and kube-controller-manager is using a certificate with the following fields:
Subject: O=kubernetes, OU=kubernetes, CN=system:kube-controller-manager
  • The --use-service-account-credentials flag is not enabled.

  • The kube-controller-manager logs reveal errors like:

event.go:217] Event(v1.ObjectReference{\
 Kind:"ReplicaSet", \
 Namespace:"kube-system", \
 Name:"kube-dns-1759312207", \
 UID:"2474cda5-5c01-11e7-84a7-fa163e6dacf0", \
 APIVersion:"extensions", \
 ResourceVersion:"371", \
 FieldPath:""}): \
  type: 'Warning' \
  reason: 'FailedCreate' \
  Error creating: User "system:kube-controller-manager" cannot create pods in the namespace "kube-system". (post pods)
replica_set.go:424] Sync "kube-system/kube-dns-1759312207" failed with \
 User "system:kube-controller-manager" cannot get replicasets.extensions \
 in the namespace "kube-system". (get replicasets.extensions kube-dns-1759312207)
  • The kube-apiserver audit log reveals errors like:
AUDIT: id="281bba2c-ec76-4c8e-8aa7-9d90f5fbc89f" \
 ip="::1" \
 method="POST" \
 user="system:kube-controller-manager" \
 groups="\"kubernetes\",\"system:authenticated\"" \
 as="<self>" \
 asgroups="<lookup>" \
 namespace="kube-system" \
 uri="/api/v1/namespaces/kube-system/pods"
AUDIT: id="281bba2c-ec76-4c8e-8aa7-9d90f5fbc89f" \
 response="403"

What you expected to happen:

The system:kube-controller-manager ClusterRole should allow kube-controller-manager to perform all the actions it is supposed to.

How to reproduce it (as minimally and precisely as possible):

  • Start kube-controller-manager without --use-service-account-credentials
  • Authenticate kube-controller-manager with an X.509 certificate whose CN field equals system:kube-controller-manager (not tested with Token or Basic auth).
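For the second step, a key and CSR matching the subject quoted in the report can be produced with openssl along these lines (a sketch; the file names are illustrative, and the CSR would still need to be signed by the cluster CA):

```shell
# Generate a key and CSR with the subject from the report:
# O=kubernetes, OU=kubernetes, CN=system:kube-controller-manager.
# With X.509 authentication, the CN becomes the RBAC user name.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout kube-controller-manager.key \
  -subj "/O=kubernetes/OU=kubernetes/CN=system:kube-controller-manager" \
  -out kube-controller-manager.csr

# Inspect the subject to confirm the CN field.
openssl req -in kube-controller-manager.csr -noout -subject
```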

Environment:

  • Kubernetes version: 1.6.6
  • Cloud provider or hardware configuration: OpenStack
  • OS: Container Linux by CoreOS 1409.2.0
  • Kernel: 4.11.6-coreos
  • Install tools: custom
  • Others:
@k8s-merge-robot


Contributor

k8s-merge-robot commented Jun 28, 2017

@antoineco There are no sig labels on this issue. Please add a sig label by:
(1) mentioning a sig: @kubernetes/sig-<team-name>-misc
e.g., @kubernetes/sig-api-machinery-* for API Machinery
(2) specifying the label manually: /sig <label>
e.g., /sig scalability for sig/scalability

Note: method (1) will trigger a notification to the team. You can find the team list here and label list here

@antoineco


Contributor

antoineco commented Jun 28, 2017

/sig auth

@antoineco


Contributor

antoineco commented Jun 28, 2017

incomplete - work in progress


Missing permissions

deployment/replicaset scaling

| apiGroups | resources | verbs |
| --- | --- | --- |
| extensions | replicasets/status, deployments/status | update |
| extensions | replicasets | get, update |
| "" | pods | create, delete |

deployment rollout

| apiGroups | resources | verbs |
| --- | --- | --- |
| extensions | replicasets | create, delete |

node management

| apiGroups | resources | verbs |
| --- | --- | --- |
| "" | nodes | patch |

misc

| apiGroups | resources | verbs |
| --- | --- | --- |
| "" | endpoints, limitranges, podtemplates | list, watch |
| extensions | networkpolicies, ingresses, thirdpartyresources | list, watch |
| settings.k8s.io | podpresets | list, watch |
| rbac.authorization.k8s.io | roles, rolebindings, clusterroles, clusterrolebindings | list, watch |
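The rules tabulated above could be collected into a single ClusterRole manifest along these lines (a sketch, not an upstream default; the role name is illustrative, and rbac/v1beta1 matches the 1.6/1.7 era of this report):

```yaml
# Hypothetical ClusterRole combining the missing permissions listed above.
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: kube-controller-manager-extra   # illustrative name
rules:
- apiGroups: ["extensions"]
  resources: ["replicasets/status", "deployments/status"]
  verbs: ["update"]
- apiGroups: ["extensions"]
  resources: ["replicasets"]
  verbs: ["get", "create", "update", "delete"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "delete"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["patch"]
- apiGroups: [""]
  resources: ["endpoints", "limitranges", "podtemplates"]
  verbs: ["list", "watch"]
- apiGroups: ["extensions"]
  resources: ["networkpolicies", "ingresses", "thirdpartyresources"]
  verbs: ["list", "watch"]
- apiGroups: ["settings.k8s.io"]
  resources: ["podpresets"]
  verbs: ["list", "watch"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]
  verbs: ["list", "watch"]
```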
@antoineco


Contributor

antoineco commented Jul 1, 2017

Correct me if I'm wrong but it seems like this behaviour is intentional:

policy.go

{Name: "system:kube-controller-manager"}
a role to use for bootstrapping the kube-controller-manager so it can create the shared informers, service accounts, and secrets that we need to create separate identities for other controllers

If that's the case, I believe the documentation should mention that the default system:kube-controller-manager ClusterRole has limited permissions ("Allows access to the resources required by the kube-controller-manager component." is misleading), and make it clearer that enabling --use-service-account-credentials is recommended for the controller-manager to operate under normal conditions.

There are two other options I thought about for improving the usability of RBAC with controller-manager:

  • Make the --use-service-account-credentials flag opt-out instead of opt-in.
  • Introduce a default ClusterRole, e.g. system:kube-controller-manager:full, which combines the rules from all the system:controller:* ClusterRoles.
@liggitt


Member

liggitt commented Jul 17, 2017

yes, it is intentional. the docs were updated to clarify the permissions for individual controllers are in individual roles, and that those roles must be granted to the controller manager if not running with --use-service-account-credentials.
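Granting an individual controller's role to the controller-manager identity could look like the following binding (a sketch; the binding name is illustrative, and a separate binding would be needed for each system:controller:* ClusterRole required):

```yaml
# Hypothetical binding granting one per-controller role to the X.509
# identity used by kube-controller-manager; repeat per controller role.
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: controller-manager-replicaset-controller   # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:controller:replicaset-controller
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:kube-controller-manager
```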

Make the --use-service-account-credentials flag opt-out instead of opt-in.

we can't do that for compatibility reasons (someone running their controller manager from a single credential they had granted sufficient permissions to in a previous release would be broken if they are not using RBAC)

Introduce a default ClusterRole, e.g. system:kube-controller-manager:full, which combines the rules from all the system:controller:* ClusterRoles.

We don't want to encourage this approach by defining a default role for it

@liggitt


Member

liggitt commented Jul 17, 2017

@liggitt liggitt closed this Jul 17, 2017
