
Helm 2.2.3 not working properly with kubeadm 1.6.1 default RBAC rules #2224

Closed
IronhandedLayman opened this issue Apr 6, 2017 · 41 comments · Fixed by #2761

@IronhandedLayman

When installing a cluster for the first time using kubeadm v1.6.1, the initialization defaults to setting up RBAC controlled access, which messes with permissions needed by Tiller to do installations, scan for installed components, and so on. helm init works without issue, but helm list, helm install, and so on all do not work, citing some missing permission or another.

A work-around for this is to create a service account, add the service account to the tiller deployment, and bind that service account to the ClusterRole cluster-admin. If that is how it should work out of the box, then those steps should be part of helm init. Ideally, a new ClusterRole should be created based on the privileges of the user instantiating the Tiller instance, but that could get complicated very quickly.

At the very least, there should be a note in the documentation so that users installing helm by following the included instructions aren't left wondering why they can't install anything.

Specific steps for my workaround:

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl edit deploy --namespace kube-system tiller-deploy #and add the line serviceAccount: tiller to spec/template/spec
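
For reference, a declarative equivalent of the first two commands, as a minimal sketch (same names and namespace as above; the v1beta1 RBAC API group matches Kubernetes 1.6, and the deployment still needs the serviceAccount edit or patch afterwards):

kubectl apply -f - <<EOF
# ServiceAccount for Tiller in kube-system
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
# Bind the built-in cluster-admin ClusterRole to that service account
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller-cluster-rule
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
EOF
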
@seh
Contributor

seh commented Apr 6, 2017

Elsewhere I proposed the idea—casually—of adding an option to helm init to allow specifying the service account name that Tiller should use.

@IronhandedLayman
Author

@seh do you think that helm init should create a default service account for Tiller, given that RBAC is becoming the default in Kubernetes (and that kubeadm gives you no choice in the matter)?

@technosophos technosophos added this to the 2.4.0 milestone Apr 6, 2017
@seh
Contributor

seh commented Apr 7, 2017

I do think that would be useful, but a conscientious administrator is going to want to be able to override that by specifying a service account name too—in which case we should trust that he will take care of ensuring the account exists.

In my Tiller deployment script, I do create a service account called—believe it or not—"tiller," together with a ClusterRoleBinding granting it the "cluster-admin" role (for now).

@technosophos technosophos changed the title Helm 2.2.3 not working properly with kubeadm 1.6.1 default installation Helm 2.2.3 not working properly with kubeadm 1.6.1 default RBAC rules Apr 7, 2017
@technosophos
Member

I've done a bunch of testing now, and I agree with @seh. The right path forward seems to be to create the necessary RBAC artifacts during helm init, but to give flags for overriding this behavior.

I would suggest that...

  • By default, we create the service account and binding, and add that account to the deployment
  • We add only the flag --service-account, which, if specified, skips creating the sa and binding, and ONLY modifies the serviceAccount field on Tiller.

Thus, the "conscientious administrator" will be taking upon themselves the task of setting up their own role bindings and service accounts.

@seh
Contributor

seh commented Apr 12, 2017

If we create the binding for the service account, presumably we'll create a ClusterRoleBinding granting the "cluster-admin" ClusterRole to Tiller's service account. We should document, though, that it's possible to use Tiller with more restrictive permissions, depending on what's contained in the charts you'll install. In some cases, for a namespace-local Tiller deployment, even the "edit" ClusterRole bound via RoleBinding would be sufficient.
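
As a rough sketch of that more restrictive setup (the namespace name tiller-world here is purely illustrative, not something from this thread):

# Namespace-local Tiller with the built-in "edit" ClusterRole bound via a
# RoleBinding, so Tiller can only manage objects inside that one namespace.
kubectl create namespace tiller-world
kubectl create serviceaccount --namespace tiller-world tiller
kubectl create rolebinding tiller-edit --namespace tiller-world \
  --clusterrole=edit \
  --serviceaccount=tiller-world:tiller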

@MaximF

MaximF commented Apr 13, 2017

@IronhandedLayman
Thank you for your solution! That finally made helm work with k8s 1.6.
Do you know where exactly the config generated by the command kubectl edit deploy --namespace kube-system tiller-deploy is stored?
This command opens a file that has the line selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/tiller-deploy, but searching for tiller-deploy across the whole file system returns nothing.

I'm working on automated installation and trying to bake this last command into Ansible. Any advice would be appreciated! Thanks!

@chancez

chancez commented Apr 13, 2017

@MaximF I believe kubectl edit uses a temp file for the changes. It queries the API for the current content, stores that in a temp file, opens it with $EDITOR, and when you close the file it submits the temp file to the API and deletes it.

If you want to keep everything in CI I suggest you just copy the deployment from the API and use kubectl apply instead of helm init.
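
A non-interactive equivalent of the kubectl edit step, which is easier to bake into Ansible or CI, is to patch the deployment; the same command appears in later comments on this issue:

# Same change as the interactive edit, but scriptable: set the Tiller
# deployment's serviceAccount field via a strategic merge patch.
kubectl patch deploy --namespace kube-system tiller-deploy \
  -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'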

@BenHall

BenHall commented Apr 14, 2017

Adding a temporary alternative solution for automation and for @MaximF:

For the Katacoda scenario (https://www.katacoda.com/courses/kubernetes/helm-package-manager), we didn't want users having to use kubectl edit to see the benefit of Helm.

Instead, we "disable" RBAC using the command kubectl create clusterrolebinding permissive-binding --clusterrole=cluster-admin --user=admin --user=kubelet --group=system:serviceaccounts;

Thanks to the Weave Cortex team for the command (cortexproject/cortex#392).

@MaximF

MaximF commented Apr 17, 2017

@BenHall after running that, I'm getting an error like this at the helm install PACKAGE step:

x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

@technosophos
Member

@michelleN I think it makes the most sense to do this in two parts:

  1. Add support for helm init --service-account=NAME. We could try to get this into 2.3.2 to greatly ease people's pain.
  2. Look into creating a default service account and role binding during helm init. That can go into 2.4.0.

michelleN pushed a commit to michelleN/helm that referenced this issue Apr 28, 2017
michelleN pushed a commit to michelleN/helm that referenced this issue May 1, 2017
michelleN pushed a commit to michelleN/helm that referenced this issue May 1, 2017
michelleN pushed a commit to michelleN/helm that referenced this issue May 1, 2017
@alexbrand

Does tiller need cluster-admin permissions? Does it make sense to maintain/document a least-privileged role that is specific to tiller, which only gives access to the endpoints it needs?

@seh
Contributor

seh commented May 2, 2017

That depends wholly on what the charts you install try to create. If they create namespaces, ClusterRoles, and ClusterRoleBindings, then Tiller needs the "cluster-admin" role. If all it does is create, say, ConfigMaps in an existing namespace, then it could get by with much less. You have to tune Tiller to what you want to do with Tiller, or, less fruitfully, vice versa.

@alexbrand

Ah, yes. Thanks @seh!

It will really depend on the charts, as they might create different objects.

@technosophos
Member

technosophos commented May 2, 2017

@seh Any chance you could whip up a quick entry in the docs/install_faq.md to summarize the RBAC advice from above?

Helm 2.4.0 will ship (later today) with the helm init --service-account=ACCOUNT_NAME flag, but we punted on defining a default SA/Role. That probably is something people ought to do on their own. Or at least that is our current operating assumption.
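
With that flag, the setup from earlier in this thread reduces to roughly the following (a sketch; the service account and binding are still created manually):

# Create the RBAC objects yourself, then let helm init point Tiller at them.
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule \
  --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller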

@technosophos
Member

The critical parts are done. Moving to 2.4.1 to remind myself about docs.

@kachkaev

I just faced this again after switching from a kubeadm-controlled k8s to a kops one. Running this:

helm init

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

Then

helm install --name=traefik stable/traefik --set=rbac.enabled=true

The kubeadm-controlled cluster does not return an error, but the kops cluster immediately shows this:

Error: release traefik failed: clusterroles.rbac.authorization.k8s.io "traefik-traefik" is forbidden: attempt to grant extra privileges: [{[get] [] [pods] [] []} {[list] [] [pods] [] []} {[watch] [] [pods] [] []} {[get] [] [services] [] []} {[list] [] [services] [] []} {[watch] [] [services] [] []} {[get] [] [endpoints] [] []} {[list] [] [endpoints] [] []} {[watch] [] [endpoints] [] []} {[get] [extensions] [ingresses] [] []} {[list] [extensions] [ingresses] [] []} {[watch] [extensions] [ingresses] [] []}] user=&{system:serviceaccount:kube-system:tiller a4668563-6d50-11e7-a489-026256e9594f [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[clusterroles.rbac.authorization.k8s.io "cluster-admin" not found]

Could this have something to do with how the kops cluster is set up by default? Both clusters report the same version:

Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.7", GitCommit:"095136c3078ccf887b9034b7ce598a0a1faff769", GitTreeState:"clean", BuildDate:"2017-07-05T16:40:42Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}

michelleN pushed a commit to michelleN/helm that referenced this issue Aug 3, 2017
michelleN pushed a commit to michelleN/helm that referenced this issue Aug 3, 2017
michelleN pushed a commit to michelleN/helm that referenced this issue Aug 3, 2017
@rk295

rk295 commented Aug 6, 2017

I've also got this error on a kops 1.7 cluster and a minikube 1.7 cluster, both using Helm 2.5.1 for the client and Tiller.

I've tried the various suggestions above regarding creating a ServiceAccount, a ClusterRoleBinding, and patching the tiller deployment, but none of the solutions work and the error message remains the same.

The traefik and nginx-ingress (a local PR I'm working on) charts exhibit the same problem. Example error below:

Error: release nginx-ingress failed: clusterroles.rbac.authorization.k8s.io "nginx-ingress-clusterrole" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["configmaps"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["configmaps"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["get"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["list"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["watch"]} PolicyRule{Resources:["events"], APIGroups:[""], Verbs:["create"]} PolicyRule{Resources:["events"], APIGroups:[""], Verbs:["patch"]} PolicyRule{Resources:["ingresses/status"], APIGroups:["extensions"], Verbs:["update"]}] user=&{system:serviceaccount:kube-system:tiller e18d1467-7a7a-11e7-a9f3-080027e3d749 [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[clusterroles.rbac.authorization.k8s.io "cluster-admin" not found]

Is this likely due to the last line of the error message:

ruleResolutionErrors=[clusterroles.rbac.authorization.k8s.io "cluster-admin" not found]

I'm not particularly au fait with RBAC on k8s; should that role exist? Neither the nginx-ingress nor the traefik charts mention it, and a kubectl get sa doesn't show it in my cluster:

% kubectl get sa --all-namespaces
NAMESPACE       NAME      SECRETS   AGE
default         default   1         8d
kube-public     default   1         8d
kube-system     default   1         8d
kube-system     tiller    1         14m
nginx-ingress   default   1         15m

@seh
Contributor

seh commented Aug 6, 2017

That's not a ServiceAccount; it's a ClusterRole.

Try the following:
kubectl get clusterroles
kubectl get clusterrole cluster-admin -o yaml

@rk295

rk295 commented Aug 6, 2017

Sorry @seh, my bad on typing the above; I did check clusterroles as well as serviceaccounts.

The output shown below is from my kops 1.7 cluster, but the clusterrole is also absent on my minikube 1.7 cluster.

% kubectl get clusterroles
NAME                      AGE
kopeio:networking-agent   2d
kops:dns-controller       2d
kube-dns-autoscaler       2d
% kubectl get clusterrole cluster-admin -o yaml
Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "cluster-admin" not found

@seh
Contributor

seh commented Aug 6, 2017

Do you have the RBAC authorizer activated? According to the documentation, each time the API server starts with RBAC activated, it will ensure that these roles and bindings are present.
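
One quick way to check, as a sketch (this assumes a kubeadm-style static-pod control plane; pod names and labels differ on kops and minikube):

# Inspect the API server's flags and look for RBAC in --authorization-mode
# (assumes the API server runs as a static pod labeled component=kube-apiserver,
# as kubeadm does; kops and minikube run it differently).
kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep authorization-mode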

@rk295

rk295 commented Aug 6, 2017

Thanks @seh I'll go and do some more digging, that link is very useful.

@vhosakot

vhosakot commented Jan 9, 2018

After running helm init, helm list and helm install stable/nginx-ingress caused the following errors for me in Kubernetes 1.8.4:

# helm list
Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"

# helm install stable/nginx-ingress
Error: no available release name found

Thanks to @kujenga! The following commands resolved the errors for me, and helm list and helm install work fine after running them:

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

@thatsk

thatsk commented Jan 22, 2018

Not working on Kubernetes v1.9.0
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be4 GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be4 GitTreeState:"clean", BuildDate:"2017-12-15T20:55:30Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
helm version
Client: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}

helm list
Error: Unauthorized

helm install stable/nginx-ingress
Error: no available release name found

@vhosakot

@thatsk Those steps worked for me in Kubernetes 1.8.4.

@bacongobbler
Member

See my reply in #3371.

@rhosisey

rhosisey commented Apr 29, 2018

In case you run the command "kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller"
and get the error below:
Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User $username cannot create clusterrolebindings.rbac.authorization.k8s.io at the cluster scope: Required "container.clusterRoleBindings.create" permission.

do the following:

  1. gcloud container clusters describe <cluster_name> --zone
    Look for the username and password in the output and copy them, then run the failing command again, this time authenticating with the admin username and password:
  2. kubectl --username="copied username" --password="copied password" create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

@matsonkepson

matsonkepson commented Jul 22, 2019

The error message brought me here

Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"

but actually I found a solution here

https://stackoverflow.com/questions/44349987/error-from-server-forbidden-error-when-creating-clusterroles-rbac-author

Step 1: Get your identity
gcloud info | grep Account

This will output something like Account: [kubectl@gserviceaccount.com]

Step 2: Grant cluster-admin to your current identity
kubectl create clusterrolebinding myname-cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=kubectl@gserviceaccount.com

In my case, the gcloud user (kubectl as a service account) had to be assigned Owner privileges in the IAM console.
