
apiserver.Authorization.Mode=RBAC dashboard CrashLoopBackOff: secrets is forbidden: User cannot create secrets #2510

Closed
berndtj opened this issue Feb 2, 2018 · 19 comments
Labels
co/apiserver Issues relating to apiserver configuration (--extra-config) co/dashboard dashboard related issues ev/CrashLoopBackOff Crash Loop Backoff events help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/bug Categorizes issue or PR as related to a bug. priority/backlog Higher priority than priority/awaiting-more-evidence.

Comments

@berndtj

berndtj commented Feb 2, 2018

Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Please provide the following details:

Environment:

Minikube version (use minikube version): v0.25.0

  • OS (e.g. from /etc/os-release): Mac OS 10.13.3
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): hyperkit
  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION):
  • Install tools:
  • Others:
    The above can be generated in one go with the following commands (can be copied and pasted directly into your terminal):
minikube version
echo "";
echo "OS:";
cat /etc/os-release
echo "";
echo "VM driver:";
grep DriverName ~/.minikube/machines/minikube/config.json
echo "";
echo "ISO version:";
grep -i ISO ~/.minikube/machines/minikube/config.json

What happened: Trying to bring up minikube with default RBAC roles. Simply running minikube start --vm-driver hyperkit without the extra-config yields no roles. To get the default roles, I added the extra-config: minikube start --vm-driver hyperkit --extra-config=apiserver.Authorization.Mode=RBAC.

The expected roles are present, but the dashboard and dns pods do not fully come up:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS             RESTARTS   AGE
kube-system   kube-addon-manager-minikube             1/1       Running            0          1m
kube-system   kube-dns-54cccfbdf8-vqdgw               2/3       Running            0          1m
kube-system   kubernetes-dashboard-77d8b98585-djkcf   0/1       CrashLoopBackOff   3          1m
kube-system   storage-provisioner                     1/1       Running            0          1m
kube-system   tiller-deploy-587df449fb-b8wd6          1/1       Running            0          50s

Tailing the dashboard logs shows:

panic: secrets is forbidden: User "system:serviceaccount:kube-system:default" cannot create secrets in the namespace "kube-system"

The error can be fixed by creating the missing clusterrolebinding:

$ kubectl create clusterrolebinding kube-system-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
clusterrolebinding "kube-system-cluster-admin" created

This should exist by default.
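For reference, a declarative equivalent of that workaround is sketched below (the binding name is arbitrary; note that binding cluster-admin to the default service account is very broad and only reasonable on a throwaway local cluster):

```yaml
# Equivalent of: kubectl create clusterrolebinding kube-system-cluster-admin \
#   --clusterrole=cluster-admin --serviceaccount=kube-system:default
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-system-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
```

Apply it with kubectl apply -f.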

What you expected to happen: All pods come up without any intervention.

How to reproduce it (as minimally and precisely as possible):

minikube start --vm-driver hyperkit --extra-config=apiserver.Authorization.Mode=RBAC

Output of minikube logs (if applicable):

Anything else we need to know: The kubeadm bootstrapper installs the RBAC roles correctly by default without requiring the extra-config.

@scottgreenup

Digging into this, I found the following in the kubedns container under the kube-dns pod:

$ kubectl logs kube-dns-... kubedns
E0221 23:56:46.848563       1 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"
E0221 23:56:46.848806       1 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:default" cannot list services at the cluster scope
E0221 23:56:46.848835       1 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:serviceaccount:kube-system:default" cannot list endpoints at the cluster scope

My pods were in the same state as @berndtj's.

My fix for this came from kubernetes-incubator/service-catalog/issues/1069:

kubectl create clusterrolebinding fixRBAC --clusterrole=cluster-admin --serviceaccount=kube-system:default

Environment:

  • Minikube version: v0.25.0
  • OS: macOS 10.13.3
  • VM Driver: hyperkit
  • ISO version: v0.25.1
  • Minikube CMD: minikube start --kubernetes-version v1.9.0 --vm-driver=hyperkit --extra-config='apiserver.Authorization.Mode=RBAC'

@kfox1111

Looks like this is fixed now? --bootstrapper=kubeadm is the default and RBAC appears to be enabled:
kubectl get pods -n kube-system kube-apiserver-minikube -o yaml | grep RBAC
- --authorization-mode=Node,RBAC

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 26, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 25, 2018
@robsonpeixoto

Any news on this?

@kelbyers

kelbyers commented Sep 6, 2018

I am having this issue, and the above workaround to create the clusterrolebinding worked for me.

$ minikube version
minikube version: v0.28.2
Example detailing symptoms and workaround:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS             RESTARTS   AGE
kube-system   etcd-minikube                           1/1       Running            0          1m
kube-system   kube-addon-manager-minikube             1/1       Running            1          1m
kube-system   kube-apiserver-minikube                 1/1       Running            0          1m
kube-system   kube-controller-manager-minikube        1/1       Running            0          1m
kube-system   kube-dns-86f4d74b45-4hhn2               3/3       Running            1          4m
kube-system   kube-proxy-4cb8c                        1/1       Running            0          1m
kube-system   kube-scheduler-minikube                 1/1       Running            1          1m
kube-system   kubernetes-dashboard-5498ccf677-dq2ct   0/1       CrashLoopBackOff   4          4m
kube-system   storage-provisioner                     1/1       Running            2          4m
$ kubectl logs -n kube-system   kubernetes-dashboard-5498ccf677-dq2ct | tail
2018/09/06 15:06:20 Synchronizer kubernetes-dashboard-key-holder-kube-system exited with error: kubernetes-dashboard-key-holder-kube-system watch ended with timeout
panic: secrets is forbidden: User "system:serviceaccount:kube-system:default" cannot create secrets in the namespace "kube-system"

goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/auth/jwe.(*rsaKeyHolder).init(0xc4204a16c0)
	/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/auth/jwe/keyholder.go:131 +0x2d3
github.com/kubernetes/dashboard/src/app/backend/auth/jwe.NewRSAKeyHolder(0x1a79e00, 0xc42034eb40, 0xc42034eb40, 0x1278c20)
	/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/auth/jwe/keyholder.go:170 +0x83
main.initAuthManager(0x1a79300, 0xc420062900, 0x384, 0x1, 0x1)
	/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/dashboard.go:161 +0x12f
main.main()
	/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/dashboard.go:95 +0x27b
$ kubectl create clusterrolebinding kube-system-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
clusterrolebinding.rbac.authorization.k8s.io/kube-system-cluster-admin created

$ kubectl delete pods -n kube-system   kubernetes-dashboard-5498ccf677-dq2ct
pod "kubernetes-dashboard-5498ccf677-dq2ct" deleted

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   etcd-minikube                           1/1       Running   0          4m
kube-system   kube-addon-manager-minikube             1/1       Running   1          3m
kube-system   kube-apiserver-minikube                 1/1       Running   0          3m
kube-system   kube-controller-manager-minikube        1/1       Running   0          3m
kube-system   kube-dns-86f4d74b45-4hhn2               3/3       Running   1          6m
kube-system   kube-proxy-4cb8c                        1/1       Running   0          4m
kube-system   kube-scheduler-minikube                 1/1       Running   1          4m
kube-system   kubernetes-dashboard-5498ccf677-hnsck   1/1       Running   0          1m
kube-system   storage-provisioner                     1/1       Running   2          6m

@tstromberg tstromberg changed the title Default minikube install with RBAC enabled fails to come up apiserver.Authorization.Mode=RBAC causes dashboard CrashLoopBackOff: secrets is forbidden: User cannot create secrets in the namespace "kube-system" Sep 20, 2018
@tstromberg tstromberg added area/rbac co/apiserver Issues relating to apiserver configuration (--extra-config) co/dashboard dashboard related issues ev/CrashLoopBackOff Crash Loop Backoff events labels Sep 20, 2018
@tstromberg tstromberg changed the title apiserver.Authorization.Mode=RBAC causes dashboard CrashLoopBackOff: secrets is forbidden: User cannot create secrets in the namespace "kube-system" apiserver.Authorization.Mode=RBAC dashboard CrashLoopBackOff: secrets is forbidden: User cannot create secrets Sep 20, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@daxgames

/reopen

@k8s-ci-robot
Contributor

@daxgames: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@jvleminc

jvleminc commented Dec 10, 2018

This issue is still present in minikube version v0.30.0 (Windows 10, VirtualBox).

Running the aforementioned command fixed it :-)

kubectl create clusterrolebinding fixRBAC --clusterrole=cluster-admin --serviceaccount=kube-system:default

@balopat balopat reopened this Dec 13, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@jvleminc

/reopen

@k8s-ci-robot
Contributor

@jvleminc: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@halfer

halfer commented Apr 22, 2019

Thanks @kelbyers, your kubectl commands got me out of this pickle.

@afbjorklund afbjorklund reopened this Jul 14, 2019
@afbjorklund afbjorklund removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jul 14, 2019
@afbjorklund
Collaborator

This still happens (sometimes) with v1.2.0; the workaround above fixed it.

@tstromberg tstromberg added help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. priority/backlog Higher priority than priority/awaiting-more-evidence. labels Jul 17, 2019
@tstromberg
Contributor

It would be nice if the dashboard command could detect this quirk and apply the aforementioned fixRBAC fix before starting the dashboard.

Help wanted!
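A rough shell sketch of that proposed pre-flight fix — hypothetical, not actual minikube code; it assumes kubectl is on PATH and pointed at the cluster, and reuses the fixRBAC binding name from earlier in this thread:

```shell
# Hypothetical pre-flight check: if the kube-system default service account
# cannot create secrets (the exact failure the dashboard panics on), apply
# the workaround binding before launching the dashboard.
ensure_dashboard_rbac() {
  # "kubectl auth can-i" exits 0 for "yes" and non-zero for "no";
  # --as impersonates the service account.
  if ! kubectl auth can-i create secrets \
      --namespace=kube-system \
      --as=system:serviceaccount:kube-system:default >/dev/null 2>&1; then
    # Same workaround as the one posted earlier in this thread.
    kubectl create clusterrolebinding fixRBAC \
      --clusterrole=cluster-admin \
      --serviceaccount=kube-system:default
  fi
}
```

On a real cluster this could run right before the dashboard pod is started; when the permission is already present, the function does nothing.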

@tstromberg
Contributor

@berndtj : I believe this issue is now addressed by minikube v1.4, as it uses a different dashboard config. If you still see this issue with minikube v1.4 or higher, please reopen this issue by commenting with /reopen

Thank you for reporting this issue!
