Enable RBAC by default #1722

Closed
r2d4 opened this issue Jul 20, 2017 · 13 comments
Labels
kind/feature: Categorizes issue or PR as related to a new feature.
lifecycle/stale: Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@r2d4
Contributor

r2d4 commented Jul 20, 2017

Is this a BUG REPORT or FEATURE REQUEST? (choose one): feature-request

Enable RBAC in the k8s cluster by default. A lot of tools already do this (hack/cluster-up, kubeadm, etc.), so it would bring minikube closer to CI/test/production environments. I think it would only entail changing some of the cluster addons and enabling the flag.
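
For context, RBAC can already be switched on per-start through the API server extra-config flag; a minimal sketch, using the localkube-era flag spelling that also appears later in this thread:

```sh
# Hedged sketch: start minikube with RBAC authorization enabled explicitly.
# The extra-config key below is the localkube-era spelling quoted elsewhere
# in this thread; the kubeadm bootstrapper spells it
# --extra-config=apiserver.authorization-mode=RBAC instead.
minikube start --extra-config=apiserver.Authorization.Mode=RBAC
```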

r2d4 added the kind/feature label Jul 20, 2017
r2d4 mentioned this issue Jul 24, 2017
wallrj added a commit to wallrj/navigator that referenced this issue Nov 8, 2017
Allow kube-dns and other kube-system services full access to the API.
See:
* kubernetes/minikube#1734
* kubernetes/minikube#1722
jetstack-bot added a commit to jetstack/navigator that referenced this issue Nov 8, 2017
Automatic merge from submit-queue.

Fix kube-dns RBAC issues

Allow kube-dns and other kube-system services full access to the API.
See:
* kubernetes/minikube#1734
* kubernetes/minikube#1722

Fixes: #107 

**Release note**:
```release-note
NONE
```
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Jan 1, 2018
@timoreimann

This would still be super desirable to have from my perspective. Happy to drive a change if there's consensus on the usefulness of the feature.

/remove-lifecycle stale

k8s-ci-robot removed the lifecycle/stale label Jan 1, 2018
@DavidWylie

This would be a great change to help keep local development and real clusters in sync.

@kfox1111

+1. As operators, we find that devs who use minikube for development often don't come up with the right RBAC rules that would allow the system to work when handed to us.

@rayterrill

+1. Struggling through a bunch of issues because I assumed minikube would work OOTB with RBAC enabled. Looks like kube-dns, at least, still needs its RBAC rules tweaked to work correctly (this is from minikube v0.25.0):

E0127 22:20:07.928086       1 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"
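
A narrower fix for exactly this error, instead of the blanket cluster-admin binding discussed below, would be a Role/RoleBinding granting configmap reads to the default service account; a minimal sketch, with illustrative resource names that are not part of the minikube addons:

```sh
# Hedged sketch: grant only the configmap access the error above complains
# about. The Role/RoleBinding names are illustrative.
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kube-dns-configmaps
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kube-dns-configmaps
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kube-dns-configmaps
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
EOF
```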

@varunpalekar

It also happens to me when I run:
minikube start --apiserver-name minikube --vm-driver none --extra-config=apiserver.Authorization.Mode=RBAC

kube-dns and kubernetes-dashboard are not able to run.

@seantanly

Ref: https://gist.github.com/F21/08bfc2e3592bed1e931ec40b8d2ab6f5

The above gist, which adds the cluster-admin cluster role to the kube-system:default service account, worked for me.
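
For reference, the gist's approach boils down to one command; a hedged sketch, with an illustrative binding name:

```sh
# Hedged sketch of the gist's approach: bind cluster-admin to the
# kube-system:default service account. Broad but effective for a throwaway
# dev cluster; too permissive for anything shared.
kubectl create clusterrolebinding kube-system-default-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:default
```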

@kfox1111

Will this make it in for 1.10?

@jstangroome
Contributor

It appears that the kube-dns pod fails when minikube is started with --extra-config=apiserver.Authorization.Mode=RBAC.

Rather than granting a blanket cluster-admin role to the kube-system default service account, I inspected a new cluster built with kubeadm defaults. The obvious difference was that the kube-dns deployment should specify the kube-dns service account for its Pods.

I created the kube-dns service account via kubectl create sa -n kube-system kube-dns and modified the kube-dns Deployment to use this service account. This fixed the kube-dns issue, since the necessary clusterrolebinding (system:kube-dns) already existed in the default minikube clean start config.
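
A sketch of those two steps, assuming kubectl patch is an acceptable way to set the service account on the existing Deployment:

```sh
# Hedged sketch of the fix described above: create the kube-dns service
# account, then point the existing Deployment's pod template at it so the
# pre-existing system:kube-dns ClusterRoleBinding applies.
kubectl create serviceaccount kube-dns -n kube-system
kubectl patch deployment kube-dns -n kube-system \
  -p '{"spec":{"template":{"spec":{"serviceAccountName":"kube-dns"}}}}'
```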

@jstangroome
Contributor

Also, kubernetes-dashboard fails due to using the default service account without the necessary role/permissions granted.

I fixed this with minikube addons disable dashboard (albeit bitten by #2281) and then manually modified the dashboard addon files at https://github.com/kubernetes/minikube/tree/v0.25.0/deploy/addons/dashboard to include the ServiceAccount, Role, and RoleBinding from https://github.com/kubernetes/dashboard/blob/v1.8.1/src/deploy/alternative/kubernetes-dashboard.yaml

It would probably also be reasonable to apply https://github.com/kubernetes/dashboard/blob/v1.8.1/src/deploy/alternative/kubernetes-dashboard.yaml directly if its Service definition used a NodePort; a sketch follows.
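
A sketch of that alternative, assuming the raw form of the manifest URL above and the kubernetes-dashboard Service name that manifest defines:

```sh
# Hedged sketch: apply the upstream dashboard manifest (which carries the
# ServiceAccount, Role, and RoleBinding), then switch its Service to a
# NodePort so it is exposed the way the minikube addon would expose it.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.8.1/src/deploy/alternative/kubernetes-dashboard.yaml
kubectl patch service kubernetes-dashboard -n kube-system \
  -p '{"spec":{"type":"NodePort"}}'
```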

@Jokero
Contributor

Jokero commented Apr 3, 2018

To be honest, I am surprised that RBAC is not enabled by default; I expected the same behavior across all Kubernetes providers. I can't just take everything prepared in minikube and deploy it to GKE, because it will not work due to RBAC errors :)

@jolson490

I believe this issue has been fixed.

Since minikube v0.26.0 the default bootstrapper is kubeadm, which enables RBAC by default.

$ minikube version
minikube version: v0.26.0
$ minikube config view
$ minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   etcd-minikube                           1/1       Running   0          1m
kube-system   kube-addon-manager-minikube             1/1       Running   0          1m
kube-system   kube-apiserver-minikube                 1/1       Running   0          2m
kube-system   kube-controller-manager-minikube        1/1       Running   0          2m
kube-system   kube-dns-86f4d74b45-jnggk               3/3       Running   0          2m
kube-system   kube-proxy-wfmlc                        1/1       Running   0          2m
kube-system   kube-scheduler-minikube                 1/1       Running   0          1m
kube-system   kubernetes-dashboard-5498ccf677-nzdxg   1/1       Running   0          2m
kube-system   storage-provisioner                     1/1       Running   0          2m
$ kubectl get pods -n kube-system kube-apiserver-minikube -o yaml | grep mode
    - --authorization-mode=Node,RBAC
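
A quick way to confirm RBAC is actually being enforced, sketched here with impersonation of the default service account from the kube-dns error earlier in the thread:

```sh
# Hedged sketch: ask the API server whether the unprivileged default
# service account may list configmaps; under enforced RBAC with no extra
# bindings this should print "no".
kubectl auth can-i list configmaps -n kube-system \
  --as=system:serviceaccount:kube-system:default
```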

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Sep 11, 2018