This repository has been archived by the owner on Feb 22, 2022. It is now read-only.

[stable/prometheus] "prometheus-prometheus-kube-state-metrics" is forbidden #3504

Closed
prkstaff opened this issue Jan 31, 2018 · 6 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


prkstaff commented Jan 31, 2018

Version of Helm and Kubernetes:
kubernetes 1.8.5-gke.0 in GCE
Client: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}

Which chart:
stable/prometheus

What happened:
Got error after running:

kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=$(gcloud info | grep Account | cut -d '[' -f 2 | cut -d ']' -f 1)

helm install -f values.yaml stable/prometheus --name prometheus --namespace prometheus --set rbac.create=true

Error: release prometheus failed: clusterroles.rbac.authorization.k8s.io "prometheus-prometheus-kube-state-metrics" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["namespaces"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["namespaces"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["persistentvolumeclaims"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["persistentvolumeclaims"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["resourcequotas"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["resourcequotas"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["replicationcontrollers"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["replicationcontrollers"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["limitranges"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["limitranges"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["persistentvolumeclaims"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["persistentvolumeclaims"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["daemonsets"], APIGroups:["extensions"], Verbs:["list"]} PolicyRule{Resources:["daemonsets"], APIGroups:["extensions"], Verbs:["watch"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["list"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["watch"]} PolicyRule{Resources:["replicasets"], APIGroups:["extensions"], Verbs:["list"]} PolicyRule{Resources:["replicasets"], APIGroups:["extensions"], Verbs:["watch"]} PolicyRule{Resources:["statefulsets"], APIGroups:["apps"], Verbs:["get"]} PolicyRule{Resources:["statefulsets"], APIGroups:["apps"], Verbs:["list"]} PolicyRule{Resources:["statefulsets"], APIGroups:["apps"], Verbs:["watch"]} PolicyRule{Resources:["cronjobs"], APIGroups:["batch"], Verbs:["list"]} PolicyRule{Resources:["cronjobs"], APIGroups:["batch"], Verbs:["watch"]} PolicyRule{Resources:["jobs"], APIGroups:["batch"], Verbs:["list"]} PolicyRule{Resources:["jobs"], APIGroups:["batch"], Verbs:["watch"]}] user=&{system:serviceaccount:kube-system:default e045ebc9-bdb0-11e7-891d-42010af002fe [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[PolicyRule{Resources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:["create"]} PolicyRule{NonResourceURLs:["/api" "/api/" "/apis" "/apis/" "/healthz" "/swaggerapi" "/swaggerapi/*" "/version"], Verbs:["get"]} PolicyRule{NonResourceURLs:["/swagger-2.0.0.pb-v1"], Verbs:["get"]} PolicyRule{NonResourceURLs:["/swagger.json"], Verbs:["get"]}] ruleResolutionErrors=[]

What you expected to happen:
no errors

Anything else we need to know:
I also gave permissions to Helm following these instructions:
https://gist.github.com/mgoodness/bd887830cd5d483446cc4cd3cb7db09d

So my user has the cluster-admin clusterrole, and so does Helm. What permissions are missing?
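
For reference, the Tiller RBAC setup that gist describes boils down to creating a dedicated service account and binding it to cluster-admin before initializing Helm, roughly (the binding name here is illustrative):

kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller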


prkstaff commented Feb 1, 2018

I found the problem.
Despite running the command:
helm init --service-account tiller as instructed here
and getting the following output:

$HELM_HOME has been configured at /home/airstrip/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!

there is no feedback on whether the new service account was actually set or not. I also didn't find any command or way of checking which service account the current Tiller installation is using.
I fixed the problem by uninstalling helm with:
helm reset
and installing it again with:
helm init --service-account tiller
Then I was able to install the stable/prometheus chart with no issue.
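
For anyone hitting the same thing: one way to check which service account an existing Tiller deployment is using (assuming the default tiller-deploy deployment that helm init creates in kube-system) is:

kubectl -n kube-system get deployment tiller-deploy -o jsonpath='{.spec.template.spec.serviceAccountName}'

and, as an alternative to a full helm reset, the deployment can be patched in place to use the tiller service account:

kubectl -n kube-system patch deployment tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller","serviceAccountName":"tiller"}}}}'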

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 2, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 1, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close


kedare commented Jul 2, 2018

Is there no way to deploy it without giving full cluster access to Tiller? We are using per-namespace Tiller deployments.
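
(Presumably the cluster-scoped RBAC could be pre-created out of band by a cluster admin and the chart installed with rbac.create=false, so a namespace-scoped Tiller never has to create ClusterRoles itself; a rough sketch, where the manifest file name is made up:

kubectl apply -f prometheus-clusterroles.yaml
helm install stable/prometheus --name prometheus --namespace prometheus --set rbac.create=false --tiller-namespace prometheus

I haven't verified this against the chart's templates, though.)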


dwaiba commented Aug 12, 2018

Works fine with the following (RBAC disabled):

git clone https://github.com/coreos/prometheus-operator.git
cd prometheus-operator
kubectl apply -f bundle.yaml
helm install helm/prometheus-operator --name prometheus-operator --namespace monitoring --set rbacEnable=false --timeout 1000 --wait
mkdir -p helm/kube-prometheus/charts
helm package -d helm/kube-prometheus/charts helm/alertmanager helm/grafana helm/prometheus  helm/exporter-kube-dns \
helm/exporter-kube-scheduler helm/exporter-kubelets helm/exporter-node helm/exporter-kube-controller-manager \
helm/exporter-kube-etcd helm/exporter-kube-state helm/exporter-coredns helm/exporter-kubernetes
helm install helm/kube-prometheus --name kube-prometheus --namespace monitoring --set global.rbacEnable=false
