
Helm chart fails to install with RBAC error on GKE #256

Closed
ahmetb opened this issue Jan 16, 2018 · 19 comments

@ahmetb ahmetb commented Jan 16, 2018

/kind bug

What happened:

  1. helm init
  2. git clone https://github.com/jetstack/cert-manager
  3. cd cert-manager
  4. helm install --name cert-manager --namespace kube-system contrib/charts/cert-manager
  5. See error:

Error: release cert-manager failed: clusterroles.rbac.authorization.k8s.io "cert-manager-cert-manager" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["certificates"], APIGroups:["certmanager.k8s.io"], Verbs:[""]} PolicyRule{Resources:["issuers"], APIGroups:["certmanager.k8s.io"], Verbs:[""]} PolicyRule{Resources:["clusterissuers"], APIGroups:["certmanager.k8s.io"], Verbs:[""]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:[""]} PolicyRule{Resources:["events"], APIGroups:[""], Verbs:[""]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:[""]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:[""]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:[""]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:[""]}] user=&{system:serviceaccount:kube-system:default 6ee23ef4-fb0f-11e7-a397-42010a80014e [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[PolicyRule{Resources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:["create"]} PolicyRule{NonResourceURLs:["/api" "/api/" "/apis" "/apis/" "/healthz" "/swaggerapi" "/swaggerapi/" "/version"], Verbs:["get"]}] ruleResolutionErrors=[]

What you expected to happen: The install to succeed.

How to reproduce it (as minimally and precisely as possible):

This is a GKE cluster (version: 1.7.11-gke.1):

gcloud container clusters create certmgrtest

Environment:

  • Kubernetes version (use kubectl version): v1.7.11-gke.1
  • Cloud provider or hardware configuration: GKE
  • Install tools: Helm v2.7.2
@ahmetb ahmetb changed the title Helm chart fails to install Helm chart fails to install with RBAC error on GKE Jan 16, 2018

@ahmetb ahmetb commented Jan 16, 2018

I got it working by adding --set rbac.create=false but I think RBAC-enabled mode should work too.


@munnerz munnerz commented Jan 17, 2018

It looks like you may have deployed tiller without RBAC support - the full docs on this are here: https://github.com/kubernetes/helm/blob/master/docs/rbac.md

The tl;dr - you need to grant the tiller service account the cluster-admin role. I usually do this with:

$ kubectl create serviceaccount -n kube-system tiller
$ kubectl create clusterrolebinding tiller-binding --clusterrole=cluster-admin --serviceaccount kube-system:tiller
$ helm init --service-account tiller

This became an issue out of the box on GKE when the default service account in kube-system stopped being granted the cluster-admin role by default (which I guess was in 1.7).

EDIT: must be 1.7 as you are on 1.7 😄
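For anyone who prefers declarative manifests, the three commands above correspond roughly to applying a ServiceAccount and ClusterRoleBinding like the following (a sketch of the equivalent YAML, not taken from any chart):

```yaml
# Equivalent of `kubectl create serviceaccount -n kube-system tiller`
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
# Equivalent of `kubectl create clusterrolebinding tiller-binding \
#   --clusterrole=cluster-admin --serviceaccount kube-system:tiller`
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
```

Apply with `kubectl apply -f`, then run `helm init --service-account tiller` as above.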


@ahmetb ahmetb commented Jan 17, 2018

Whoa this is too complicated. I expected something like helm init --with-proper-rbac. But I guess what's above will do.

I think it happened around 1.7.


@ahmetb ahmetb commented Jan 17, 2018

It might be worth documenting this.


@munnerz munnerz commented Jan 17, 2018

I agree - it's really annoying and catches me out every time (usually in demos... 🙄)! I think there must be an upstream issue for this somewhere? Couldn't find one myself though.


@munnerz munnerz commented Jan 17, 2018

To be fair, the doc you linked does have an "Example: Service account with cluster-admin role" section that details what I've written above (albeit more verbosely).

I know what you are getting at: instructions (both ours referencing theirs, and theirs) should be super clear and simple. You should have been able to debug the error message you got from a NOTES section or something similar that links off to the RBAC doc.


@ahmetb ahmetb commented Jan 17, 2018

I think it's unrealistic to think people are going to read https://github.com/kubernetes/helm/blob/master/docs/rbac.md before doing "helm init".

In cert-manager's Helm doc there's no mention of "helm init" at all, so the Helm client will recommend running it when "helm install" fails. Most people will just type "helm init", only to find out there's more to it.

Arguably Helm could do better here by initializing itself with good defaults, though I don't know why it doesn't. In the meantime, I think we can do better by copying those 3 lines into the Helm documentation here.


@munnerz munnerz commented Jan 17, 2018

True - but I don't want to repeat the Helm install docs. Would it be sufficient to add a 'Step 0: install Helm' to our install docs? Users that already have Helm configured could then skip over it.


@ahmetb ahmetb commented Jan 17, 2018

I think that's reasonable. I'm assuming that if your Helm doesn't have the right RBAC config, it'll fail installing most things. But if I had those 3 lines, I could just copy-paste them instead of spending many minutes here. :)

jetstack-ci-bot added a commit that referenced this issue Jan 18, 2018
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Expand deployment docs, including docs on ingress-shim

**What this PR does / why we need it**:

Improves our documentation to further explain how Helm deployment works (including RBAC and extraArgs).

It also adds a doc on ingress-shim - what it does, how it works, how to configure it and how to use it.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

Fixes #256, fixes #257, fixes #136 

**Release note**:
```release-note
Improve deployment documentation
```

/cc @ahmetb

@jpds jpds commented Feb 14, 2018

On AWS with kops, I still get this error when installing cert-manager (even after doing the "Example: Service account with cluster-admin role" section):

release cert-manager failed: clusterroles.rbac.authorization.k8s.io "cert-manager-cert-manager" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["certificates"], APIGroups:["certmanager.k8s.io"], Verbs:["*"]} PolicyRule{Resources:["issuers"], APIGroups:["certmanager.k8s.io"], Verbs:["*"]} PolicyRule{Resources:["clusterissuers"], APIGroups:["certmanager.k8s.io"], Verbs:["*"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["events"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["*"]}] user=&{system:serviceaccount:kube-system:tiller a8f371ca-116c-11e8-b56e-0ad3089af66a [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[clusterroles.rbac.authorization.k8s.io "cluster-admin" not found]
@cloudify cloudify commented Feb 14, 2018

Just had the exact same issue with Azure Container Service. I also tried helm reset / init to no avail.

I actually found that the issue with ACS is this one: the cluster-admin role doesn't get created by default.


@munnerz munnerz commented Feb 14, 2018

@jpds from the looks of your error message (namely: clusterroles.rbac.authorization.k8s.io "cluster-admin" not found), it appears you don't have RBAC enabled in your cluster, hence the error.

Try setting --set rbac.create=false

@rayfoss rayfoss commented Feb 19, 2018

Had this issue on GCE 1.9.2-gke.1

This should be added to the installation README, and perhaps the issue reopened.


@munnerz munnerz commented Feb 19, 2018

I'm not quite sure what should be added to the README here, but would be more than happy to merge PRs that people think improve clarity!

@HerrmannHinz HerrmannHinz commented Feb 20, 2018

running into the same issue:

helm install --name cert-manager --namespace kube-system contrib/charts/cert-manager --set rbac.create=false

result:
Error: release cert-manager failed: clusterroles.rbac.authorization.k8s.io "cert-manager-cert-manager" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["certificates"], APIGroups:["certmanager.k8s.io"], Verbs:["*"]} PolicyRule{Resources:["issuers"], APIGroups:["certmanager.k8s.io"], Verbs:["*"]} PolicyRule{Resources:["clusterissuers"], APIGroups:["certmanager.k8s.io"], Verbs:["*"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["events"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["*"]}] user=&{system:serviceaccount:kube-system:default 842f836f-10d0-11e8-9452-0290d451c828 [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[]

What am I doing wrong here?

@HerrmannHinz HerrmannHinz commented Feb 20, 2018

Ahhh, I checked the source code of the templates. It seems the conditional is
{{- if .Values.rbac.enabled -}}, i.e. rbac.enabled
instead of rbac.create.

When using rbac.enabled=false the deployment now works for me.
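For context, the chart template presumably wraps its RBAC resources in a conditional along these lines (a sketch based on the snippet quoted above; the file path, `fullname` helper, and rule list are illustrative, not the chart's exact contents):

```yaml
# templates/rbac.yaml (hypothetical path): the ClusterRole is only
# rendered when rbac.enabled is true in values.yaml
{{- if .Values.rbac.enabled }}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: {{ template "fullname" . }}
rules:
- apiGroups: ["certmanager.k8s.io"]
  resources: ["certificates", "issuers", "clusterissuers"]
  verbs: ["*"]
{{- end }}
```

With this structure, `helm install --set rbac.enabled=false` skips rendering the ClusterRole (and its binding) entirely, so Tiller never needs permission to grant those privileges.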


jetstack-ci-bot added a commit that referenced this issue Feb 25, 2018
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

docs: fix value name that disables rbac

**What this PR does / why we need it**:

Proper documentation for deploying cert-manager for k8s clusters without rbac enabled (happens to be the default for cdk on localhost).

**Which issue this PR fixes**

No issue per se, a follow-up on #256.
@EIrwin EIrwin commented Feb 20, 2019

For anybody running into this: I followed every other example and nothing worked until I read this.

TL;DR: I needed to create an extra role binding for kube-system:default.

kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
@nielskrijger nielskrijger commented Feb 28, 2019

Had the same issue with a tillerless helm install on a new cluster in GKE. I was following the instructions for a Helm install but skipped the clusterrolebinding instructions (since tillerless helm runs locally I thought it didn't apply).

Turns out my own user, despite being IAM "owner", doesn't have cluster-admin privileges by default on GKE for a new cluster. This issue is covered in the docs under the normal non-helm install (https://docs.cert-manager.io/en/latest/getting-started/install.html), but since I was doing a "helm"(ish) install I had skipped that section.

In short the following did work:

# Install the CustomResourceDefinition resources separately
$ kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.6/deploy/manifests/00-crds.yaml

# Create the namespace for cert-manager
$ kubectl create namespace cert-manager

# Label the cert-manager namespace to disable resource validation
$ kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true

# Update your local Helm chart repository cache
$ helm repo update

# EXTRA STEP WHEN USING GKE: add the cluster-admin role to the current user
$ kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole=cluster-admin \
    --user=$(gcloud config get-value core/account)

# Install the cert-manager Helm chart using tillerless helm
$ helm tiller run cert-manager -- helm install --name cert-manager --namespace cert-manager stable/cert-manager