apiserver.Authorization.Mode=RBAC dashboard CrashLoopBackOff: secrets is forbidden: User cannot create secrets #2510
Comments
Digging into this, I found the following in the kubedns container under the kube-dns pod:
My pods were in the same state as @berndtj's. My fix for this came from kubernetes-incubator/service-catalog/issues/1069:
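The command itself did not survive in this copy of the thread. Based on the workaround described in that service-catalog issue, it is a `kubectl create clusterrolebinding` along the lines of the sketch below (the binding name is illustrative, not prescribed):

```sh
# Bind cluster-admin to the kube-system default service account, which the
# dashboard runs under, so it is allowed to create the secrets it needs.
# The binding name "kube-system-cluster-admin" is arbitrary/illustrative.
kubectl create clusterrolebinding kube-system-cluster-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:default
```

Granting cluster-admin to a default service account is a blunt workaround; it is fine for a throwaway local cluster, but not something to replicate on a shared one.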
Looks like this is fixed now? --bootstrapper=kubeadm is the default and seems enabled:
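One way to check which bootstrapper is configured (assuming a minikube version of this era, where `bootstrapper` was a supported config key):

```sh
# Prints the bootstrapper if one has been set explicitly via `minikube config set`.
minikube config get bootstrapper
```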
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Any news on this?
I am having this issue, and the above workaround to create the clusterrolebinding worked for me.
Here is an example detailing the symptoms and the workaround.
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@daxgames: You can't reopen an issue/PR unless you authored it or you are a collaborator. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This issue is still present in minikube v0.30.0 (Windows 10, VirtualBox). Running the aforementioned command fixed it :-)
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@jvleminc: You can't reopen an issue/PR unless you authored it or you are a collaborator. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Thanks @kelbyers, your workaround worked for me.
This still happens (sometimes) with v1.2.0; the workaround above fixed it.
It would be nice if the dashboard command could detect this quirk and apply the aforementioned RBAC fix before starting the dashboard, somewhere around here: minikube/cmd/minikube/cmd/dashboard.go, line 97 (at commit 4178c44). A sketch of the idea follows. Help wanted!
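Expressed as shell rather than minikube's Go, purely as a hedged sketch of the proposed behavior (the binding name is illustrative; this is not minikube's actual code):

```sh
# Sketch: probe for a cluster-admin binding on kube-system's default service
# account and create it only when missing, before launching the dashboard.
if ! kubectl get clusterrolebinding kube-system-cluster-admin >/dev/null 2>&1; then
  kubectl create clusterrolebinding kube-system-cluster-admin \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:default
fi
```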
@berndtj: I believe this issue is now addressed by minikube v1.4, as it uses a different dashboard config. If you still see this issue with minikube v1.4 or higher, please reopen this issue by commenting with /reopen. Thank you for reporting this issue!
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Please provide the following details:
Environment:
- Minikube version (use `minikube version`): v0.25.0
- VM driver (use `cat ~/.minikube/machines/minikube/config.json | grep DriverName`): hyperkit
- ISO version (use `cat ~/.minikube/machines/minikube/config.json | grep -i ISO` or `minikube ssh cat /etc/VERSION`):
What happened: Trying to bring up minikube with the default RBAC roles. Simply running

```sh
minikube start --vm-driver hyperkit
```

without the extra-config yields no roles. To get the default roles, I added the extra-config:

```sh
minikube start --vm-driver hyperkit --extra-config=apiserver.Authorization.Mode=RBAC
```

The expected roles are present, but the dashboard and dns pods do not fully come up:
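The pod listing itself was not captured in this copy of the issue. The symptom is typically observed with a command like the following, with the dashboard and kube-dns pods stuck in CrashLoopBackOff rather than Running:

```sh
# List the system pods; the dashboard and dns pods are the ones failing here.
kubectl get pods --namespace kube-system
```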
Tailing the dashboard logs shows:
The error can be fixed by creating the missing clusterrolebinding (the `kubectl create clusterrolebinding` workaround shown earlier in this thread).
This should exist by default.
What you expected to happen: All pods come up without any intervention.
How to reproduce it (as minimally and precisely as possible):

```sh
minikube start --vm-driver hyperkit --extra-config=apiserver.Authorization.Mode=RBAC
```
Output of `minikube logs` (if applicable):

Anything else we need to know: The kubeadm bootstrapper installs the RBAC roles correctly by default, without requiring the extra-config.