Helm 2.2.3 not working properly with kubeadm 1.6.1 default RBAC rules #2224

Closed
IronhandedLayman opened this Issue Apr 6, 2017 · 40 comments

@IronhandedLayman

IronhandedLayman commented Apr 6, 2017

When installing a cluster for the first time using kubeadm v1.6.1, the initialization defaults to setting up RBAC-controlled access, which interferes with the permissions Tiller needs to perform installations, scan for installed components, and so on. helm init works without issue, but helm list, helm install, and so on all fail, citing one missing permission or another.

A work-around for this is to create a service account, add the service account to the tiller deployment, and bind that service account to the ClusterRole cluster-admin. If that is how it should work out of the box, then those steps should be part of helm init. Ideally, a new ClusterRole should be created based on the privileges of the user instantiating the Tiller instance, but that could get complicated very quickly.

At the very least, there should be some mention of this in the documentation so that users who install helm by following the included instructions aren't left wondering why they can't install anything.

Specific steps for my workaround:

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl edit deploy --namespace kube-system tiller-deploy  # and add the line serviceAccount: tiller under spec.template.spec
@seh

Collaborator

seh commented Apr 6, 2017

Elsewhere I proposed the idea—casually—of adding an option to helm init to allow specifying the service account name that Tiller should use.

@IronhandedLayman

IronhandedLayman commented Apr 6, 2017

@seh do you think that helm init should create a default service account for Tiller, given that RBAC is becoming the default in Kubernetes (and that kubeadm gives you no choice in the matter)?

@technosophos technosophos added this to the 2.4.0 milestone Apr 6, 2017

@seh

Collaborator

seh commented Apr 7, 2017

I do think that would be useful, but a conscientious administrator is going to want to be able to override that by specifying a service account name too—in which case we should trust that he will take care of ensuring the account exists.

In my Tiller deployment script, I do create a service account called—believe it or not—"tiller," together with a ClusterRoleBinding granting it the "cluster-admin" role (for now).

@technosophos technosophos changed the title from Helm 2.2.3 not working properly with kubeadm 1.6.1 default installation to Helm 2.2.3 not working properly with kubeadm 1.6.1 default RBAC rules Apr 7, 2017

@technosophos

Member

technosophos commented Apr 11, 2017

I've done a bunch of testing now, and I agree with @seh. The right path forward seems to be to create the necessary RBAC artifacts during helm init, but to provide flags for overriding this behavior.

I would suggest that...

  • By default, we create the service account and binding, and add the account to the deployment
  • We add only the flag --service-account, which, if specified, skips creating the service account and binding, and ONLY modifies the serviceAccount field on Tiller.

Thus, the "conscientious administrator" will be taking upon themselves the task of setting up their own role bindings and service accounts.

@seh

Collaborator

seh commented Apr 12, 2017

If we create the binding for the service account, presumably we'll create a ClusterRoleBinding granting the "cluster-admin" ClusterRole to Tiller's service account. We should document, though, that it's possible to use Tiller with more restrictive permissions, depending on what's contained in the charts you'll install. In some cases, for a namespace-local Tiller deployment, even the "edit" ClusterRole bound via RoleBinding would be sufficient.
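
As a minimal sketch of that more restrictive setup, assuming a hypothetical namespace named apps and a kubectl new enough to have create rolebinding:

kubectl create serviceaccount --namespace apps tiller
kubectl create rolebinding tiller-edit --namespace apps --clusterrole=edit --serviceaccount=apps:tiller

A Tiller bound this way could manage namespace-scoped resources in apps, but not cluster-scoped objects such as ClusterRoles or namespaces.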

@MaximF

MaximF commented Apr 13, 2017

@IronhandedLayman
Thank you for your solution! That finally made helm work with k8s 1.6.
Do you know where exactly the config file generated by the command kubectl edit deploy --namespace kube-system tiller-deploy is stored?
That command opens a file which has the line selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/tiller-deploy, however searching for tiller-deploy across the whole file system returns nothing.

I'm working on automated installation and trying to bake this last command into Ansible. Any advice would be appreciated! Thanks!

@chancez

Contributor

chancez commented Apr 13, 2017

@MaximF I believe kubectl edit uses a temp file for changes. It queries the API for the current content, stores that in a temp file, opens it with $EDITOR, and when you close the file, it submits the temp file to the API and deletes it.

If you want to keep everything in CI, I suggest you just copy the deployment from the API and use kubectl apply instead of helm init.
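
A rough sketch of that approach, assuming helm init has already been run once somewhere to produce a deployment to copy, and using an arbitrary file name:

kubectl get deploy tiller-deploy --namespace kube-system -o yaml > tiller-deploy.yaml
# edit tiller-deploy.yaml to add serviceAccount: tiller under spec.template.spec
kubectl apply -f tiller-deploy.yaml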

@BenHall

BenHall commented Apr 14, 2017

Adding a temporary alternative solution for automation, and for @MaximF.

For the Katacoda scenario (https://www.katacoda.com/courses/kubernetes/helm-package-manager), we didn't want users having to use kubectl edit to see the benefit of Helm.

Instead, we "disable" RBAC using the command kubectl create clusterrolebinding permissive-binding --clusterrole=cluster-admin --user=admin --user=kubelet --group=system:serviceaccounts

Thanks to the Weave Cortex team for the command (weaveworks/cortex#392).

@MaximF

MaximF commented Apr 17, 2017

@BenHall after running that, I'm getting an error like this at the helm install PACKAGE step:

x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
@technosophos

Member

technosophos commented Apr 18, 2017

@michelleN I think it makes the most sense to do this in two parts:

  1. Add support for helm init --service-account=NAME. We could try to get this into 2.3.2 to greatly ease people's pain.
  2. Look into creating a default service account and role binding during helm init. That we can get into 2.4.0.

michelleN added a commit to michelleN/helm that referenced this issue Apr 28, 2017

michelleN added a commit to michelleN/helm that referenced this issue May 1, 2017

michelleN added a commit to michelleN/helm that referenced this issue May 1, 2017

michelleN added a commit to michelleN/helm that referenced this issue May 1, 2017

@alexbrand

alexbrand commented May 2, 2017

Does tiller need cluster-admin permissions? Does it make sense to maintain/document a least-privileged role that is specific to tiller, which only gives access to the endpoints it needs?

@seh

Collaborator

seh commented May 2, 2017

That depends wholly on what the charts you install try to create. If they create namespaces, ClusterRoles, and ClusterRoleBindings, then Tiller needs the "cluster-admin" role. If all it does is create, say, ConfigMaps in an existing namespace, then it could get by with much less. You have to tune Tiller to what you want to do with Tiller, or, less fruitfully, vice versa.

@alexbrand

alexbrand commented May 2, 2017

Ah, yes. Thanks @seh!

It will really depend on the charts, as they might create different objects.

@technosophos

Member

technosophos commented May 2, 2017

@seh Any chance you could whip up a quick entry in the docs/install_faq.md to summarize the RBAC advice from above?

Helm 2.4.0 will ship (later today) with the helm init --service-account=ACCOUNT_NAME flag, but we punted on defining a default SA/Role. That probably is something people ought to do on their own. Or at least that is our current operating assumption.

@technosophos

Member

technosophos commented May 2, 2017

The critical parts are done. Moving to 2.4.1 to remind myself about docs.

@weitzj

weitzj commented May 4, 2017

So. Right now I am binding cluster-admin to a serviceAccount: helm. It should be improved, but here you go:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: helm
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: helm
    namespace: kube-system
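
To use this, the manifest above would be saved to a file and applied before initializing Helm; for example (the file name helm-rbac.yaml is arbitrary):

kubectl apply -f helm-rbac.yaml
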
helm init --service-account helm
@kujenga

kujenga commented May 8, 2017

To automate the workaround, here's a non-interactive version of the temporary fix described in the first comment here, using patch instead of edit:

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
@bobbychef64

bobbychef64 commented May 16, 2017

How do I find out whether RBAC is enabled on a k8s cluster or not? I am using the following version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:33:11Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:22:08Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

@Bregor

Bregor commented May 16, 2017

@bobbychef64 $ kubectl api-versions|grep rbac

@bobbychef64

bobbychef64 commented May 16, 2017

Thanks Bregor for your reply. I executed the command; the output is below:
$ kubectl api-versions|grep rbac
rbac.authorization.k8s.io/v1alpha1
rbac.authorization.k8s.io/v1beta1

@bobbychef64

bobbychef64 commented May 16, 2017

From the above output I am thinking that RBAC is enabled, so I started to create a role, but I am getting the below error.

$ kubectl create role pod-reader \
    --verb=get \
    --verb=list \
    --verb=watch \
    --resource=pods \
    --namespace=ns-1
Error from server (Forbidden): roles.rbac.authorization.k8s.io "pod-reader" is forbidden: attempt to grant extra privileges: [{[get] [] [pods] [] []} {[list] [] [pods] [] []} {[watch] [] [pods] [] []}] user=&{admin admin [system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[]
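
For context: the "attempt to grant extra privileges ... ownerrules=[]" message is RBAC's privilege-escalation prevention; a user can only grant permissions that it already holds itself. A quick way to check what your own user is allowed to do, assuming a kubectl recent enough to have auth can-i:

kubectl auth can-i list pods --namespace ns-1
kubectl auth can-i create roles --namespace ns-1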

@isansahoo

isansahoo commented Jun 1, 2017

When I execute:
kubectl edit deploy --namespace kube-system tiller-deploy

I'm getting the error below:
Error from server (NotFound): deployments.extensions "tiller-deploy" not found

Please help me.

@kujenga

kujenga commented Jun 1, 2017

@isansahoo have you run helm init? Check out the quick start guide: https://github.com/kubernetes/helm/blob/master/docs/quickstart.md

@technosophos

Member

technosophos commented Jun 6, 2017

Just realized that we don't put docs changes on patch releases. So bumping to 2.5.0.

@technosophos

Member

technosophos commented Jun 12, 2017

Any update on this for 2.5.0?

@kachkaev

kachkaev commented Jul 20, 2017

I just faced this again after switching from a kubeadm-controlled k8s to a kops one. Running this:

helm init

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

Then

helm install --name=traefik stable/traefik --set=rbac.enabled=true

The kubeadm-controlled cluster does not return an error, but the kops cluster immediately shows this:

Error: release traefik failed: clusterroles.rbac.authorization.k8s.io "traefik-traefik" is forbidden: attempt to grant extra privileges: [{[get] [] [pods] [] []} {[list] [] [pods] [] []} {[watch] [
] [pods] [] []} {[get] [] [services] [] []} {[list] [] [services] [] []} {[watch] [] [services] [] []} {[get] [] [endpoints] [] []} {[list] [] [endpoints] [] []} {[watch] [] [endpoints] [] []} {[ge
t] [extensions] [ingresses] [] []} {[list] [extensions] [ingresses] [] []} {[watch] [extensions] [ingresses] [] []}] user=&{system:serviceaccount:kube-system:tiller a4668563-6d50-11e7-a489-026256e9
594f [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[clusterroles.rbac.authorization.k8s.io "cluster-admin" not found]

Could this be something to do with how the kops cluster is set up by default? Both clusters' version is:

{Server Version: version.Info{Major:"1", Minor:"6",
GitVersion:"v1.6.7",GitCommit:"095136c3078ccf887b9034b7ce598a0a1faff769",GitTreeState:"clean",
BuildDate:"2017-07-05T16:40:42Z",GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}

michelleN added a commit to michelleN/helm that referenced this issue Aug 3, 2017

michelleN added a commit to michelleN/helm that referenced this issue Aug 3, 2017

michelleN added a commit to michelleN/helm that referenced this issue Aug 3, 2017

@rk295

rk295 commented Aug 6, 2017

I've also got this error on a kops 1.7 cluster and a minikube 1.7 cluster, both using helm version 2.5.1 on the client side and for Tiller.

I've tried the various suggestions above regarding creating a ServiceAccount, ClusterRoleBinding and patching the tiller deployment, but none of the solutions work and the error message remains the same.

traefik and the nginx-ingress (a local PR I'm working on) charts are exhibiting the same problem. Example error below:

Error: release nginx-ingress failed: clusterroles.rbac.authorization.k8s.io "nginx-ingress-clusterrole" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["configmaps"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["configmaps"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["get"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["list"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["watch"]} PolicyRule{Resources:["events"], APIGroups:[""], Verbs:["create"]} PolicyRule{Resources:["events"], APIGroups:[""], Verbs:["patch"]} PolicyRule{Resources:["ingresses/status"], APIGroups:["extensions"], Verbs:["update"]}] user=&{system:serviceaccount:kube-system:tiller e18d1467-7a7a-11e7-a9f3-080027e3d749 [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[clusterroles.rbac.authorization.k8s.io "cluster-admin" not found]

Is this likely due to the last line of the error message:

ruleResolutionErrors=[clusterroles.rbac.authorization.k8s.io "cluster-admin" not found]

I'm not particularly au fait with RBAC on k8s - should that role exist? Neither the nginx-ingress nor the traefik charts mention it, and a kubectl get sa doesn't show it in my cluster:

% kubectl get sa --all-namespaces
NAMESPACE       NAME      SECRETS   AGE
default         default   1         8d
kube-public     default   1         8d
kube-system     default   1         8d
kube-system     tiller    1         14m
nginx-ingress   default   1         15m
@seh

Collaborator

seh commented Aug 6, 2017

That's not a ServiceAccount; it's a ClusterRole.

Try the following:
kubectl get clusterroles
kubectl get clusterrole cluster-admin -o yaml

@rk295

rk295 commented Aug 6, 2017

Sorry @seh, my bad on typing the above; I did check clusterroles as well as serviceaccounts.

The output shown below is from my kops 1.7 cluster, but also on my minikube 1.7 cluster the clusterrole is absent.

% kubectl get clusterroles
NAME                      AGE
kopeio:networking-agent   2d
kops:dns-controller       2d
kube-dns-autoscaler       2d
% kubectl get clusterrole cluster-admin -o yaml
Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "cluster-admin" not found
@seh

Collaborator

seh commented Aug 6, 2017

Do you have the RBAC authorizer activated? According to the documentation, each time the API server starts with RBAC activated, it will ensure that these roles and bindings are present.
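
As a rough sketch of how to check that (file and process locations vary by installer):

# kubeadm-style static pod manifest:
grep authorization-mode /etc/kubernetes/manifests/kube-apiserver.yaml
# or inspect the running process and look for RBAC in --authorization-mode:
ps aux | grep kube-apiserver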

@rk295

rk295 commented Aug 6, 2017

Thanks @seh I'll go and do some more digging, that link is very useful.

@vhosakot

vhosakot commented Jan 9, 2018

After running helm init, helm list and helm install stable/nginx-ingress caused the following errors for me in Kubernetes 1.8.4:

# helm list
Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"

# helm install stable/nginx-ingress
Error: no available release name found

Thanks to @kujenga! The following commands resolved the errors for me; helm list and helm install work fine after running them:

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
@thatsk

thatsk commented Jan 22, 2018

Not working on Kubernetes v1.9.0
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be4 GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be4 GitTreeState:"clean", BuildDate:"2017-12-15T20:55:30Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
helm version
Client: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}

helm list
Error: Unauthorized

helm install stable/nginx-ingress
Error: no available release name found

@vhosakot

vhosakot commented Jan 22, 2018

@thatsk Those steps worked for me in kubernetes 1.8.4.

@bacongobbler

Member

bacongobbler commented Jan 22, 2018

See my reply in #3371.

@rhosisey

rhosisey commented Apr 29, 2018

In case you run the command kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
and you get the error below:
Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User $username cannot create clusterrolebindings.rbac.authorization.k8s.io at the cluster scope: Required "container.clusterRoleBindings.create" permission.

do the following:

  1. gcloud container clusters describe <cluster_name> --zone <zone>
    Look for the username and password in the output, copy them, and then run the same command again, this time with the admin username and password:
  2. kubectl --username="copied username" --password="copied password" create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller