Enable RBAC by default #1722

Closed
r2d4 opened this Issue Jul 20, 2017 · 13 comments

@r2d4
Member

r2d4 commented Jul 20, 2017

Is this a BUG REPORT or FEATURE REQUEST? (choose one): feature-request

Enable RBAC in the k8s cluster by default. A lot of tools do this already (hack/cluster-up, kubeadm, etc.), so it would bring minikube closer to CI/test/production environments. I think it would only entail changing some of the cluster addons and enabling the apiserver flag.
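
For anyone who wants this behavior now, a minimal sketch using the apiserver extra-config flag that comes up later in this thread (assuming the current bootstrapper honors it):

```sh
# Start minikube with the apiserver authorization mode set to RBAC
# rather than the permissive default.
minikube start --extra-config=apiserver.Authorization.Mode=RBAC
```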

@r2d4 r2d4 added the kind/feature label Jul 20, 2017

@r2d4 r2d4 referenced this issue Jul 24, 2017

Closed

RBAC is broken #1734

wallrj added a commit to wallrj/navigator that referenced this issue Nov 8, 2017

Fix kube-dns RBAC issues.
Allow kube-dns and other kube-system services full access to the API.
See:
* kubernetes/minikube#1734
* kubernetes/minikube#1722

jetstack-bot added a commit to jetstack/navigator that referenced this issue Nov 8, 2017

Merge pull request #108 from wallrj/107-kube-system-rbac
Automatic merge from submit-queue.

Fix kube-dns RBAC issues

Allow kube-dns and other kube-system services full access to the API.
See:
* kubernetes/minikube#1734
* kubernetes/minikube#1722

Fixes: #107 

**Release note**:
```release-note
NONE
```
@fejta-bot

fejta-bot commented Jan 1, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@timoreimann

timoreimann commented Jan 1, 2018

This would still be super desirable to have from my perspective. Happy to drive a change if there's consensus on the usefulness of the feature.

/remove-lifecycle stale

@DavidWylie

DavidWylie commented Jan 24, 2018

This would be a great change to help keep local development and cluster in sync.

@kfox1111

kfox1111 commented Jan 24, 2018

+1. We operators are finding that devs who use minikube for development often don't come up with the right RBAC rules to make the system work when it's handed over to us.

@rayterrill

rayterrill commented Jan 27, 2018

+1. Struggling through a bunch of issues because I assumed minikube would work OOTB with RBAC enabled. It looks like at least kube-dns still needs its RBAC rules tweaked to run correctly with RBAC enabled (this is from minikube v0.25.0):

E0127 22:20:07.928086       1 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"
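
A narrower workaround for this specific error might look like the following (the role and binding names are made up for illustration, and this covers only the configmap permission the error complains about):

```sh
# Allow the default service account in kube-system to list configmaps,
# which is the exact permission the reflector error above is missing.
kubectl create role configmap-lister -n kube-system \
  --verb=list --resource=configmaps
kubectl create rolebinding default-configmap-lister -n kube-system \
  --role=configmap-lister \
  --serviceaccount=kube-system:default
```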
@varunpalekar

varunpalekar commented Jan 30, 2018

It also happens to me when I run:
minikube start --apiserver-name minikube --vm-driver none --extra-config=apiserver.Authorization.Mode=RBAC

kube-dns and kubernetes-dashboard are not able to run.

@seantanly

seantanly commented Feb 21, 2018

Ref: https://gist.github.com/F21/08bfc2e3592bed1e931ec40b8d2ab6f5

The above gist, which binds the cluster-admin cluster role to the kube-system:default service account, worked for me.
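
For reference, the gist boils down to something like this (the binding name here is arbitrary); note that it grants kube-system workloads full cluster access, so it bypasses RBAC for those services rather than configuring proper rules:

```sh
# Broad workaround: bind cluster-admin to the default service account
# in kube-system so kube-dns and the dashboard can reach the API.
kubectl create clusterrolebinding kube-system-default-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:default
```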

@kfox1111

kfox1111 commented Mar 15, 2018

Will this make it in for 1.10?

@jstangroome

Contributor

jstangroome commented Mar 22, 2018

It appears that the kube-dns pod fails when minikube is started with --extra-config=apiserver.Authorization.Mode=RBAC.

Rather than granting a blanket cluster-admin role to the kube-system default service account, I inspected a new cluster built with kubeadm defaults. The obvious difference was that there the kube-dns Deployment specifies the kube-dns service account for its Pods.

I created the kube-dns service account via kubectl create sa -n kube-system kube-dns and modified the kube-dns Deployment to use this service account. This fixed the kube-dns issue, since the necessary clusterrolebinding (system:kube-dns) already existed in the default minikube clean start config.
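
A sketch of those steps (the patch invocation is just one way to point the Deployment at the new service account):

```sh
# Create the kube-dns service account...
kubectl create sa -n kube-system kube-dns

# ...and have the kube-dns Deployment run its Pods under it, so the
# pre-existing system:kube-dns clusterrolebinding applies.
kubectl patch deployment kube-dns -n kube-system \
  -p '{"spec":{"template":{"spec":{"serviceAccountName":"kube-dns"}}}}'
```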

@jstangroome

Contributor

jstangroome commented Mar 22, 2018

Also, kubernetes-dashboard fails due to using the default service account without the necessary role/permissions granted.

I fixed this with minikube addons disable dashboard (albeit bitten by #2281) and then manually modified the dashboard addon files at https://github.com/kubernetes/minikube/tree/v0.25.0/deploy/addons/dashboard to include the ServiceAccount, Role, and RoleBinding from https://github.com/kubernetes/dashboard/blob/v1.8.1/src/deploy/alternative/kubernetes-dashboard.yaml

It would probably also be reasonable to apply https://github.com/kubernetes/dashboard/blob/v1.8.1/src/deploy/alternative/kubernetes-dashboard.yaml directly if its Service definition used a NodePort.
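
Roughly, and assuming the v1.8.1 alternative manifest still deploys into kube-system, that could look like:

```sh
# Apply the upstream alternative dashboard manifest (ServiceAccount,
# Role, RoleBinding, Deployment, Service)...
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.8.1/src/deploy/alternative/kubernetes-dashboard.yaml

# ...then switch its Service to NodePort so it is exposed the way the
# minikube dashboard addon exposes it.
kubectl patch service kubernetes-dashboard -n kube-system \
  -p '{"spec":{"type":"NodePort"}}'
```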

@Jokero

Contributor

Jokero commented Apr 3, 2018

To be honest, I am surprised that RBAC is not enabled by default; I expected the same behavior across all Kubernetes providers. If I want to use GKE, I can't just take everything prepared in minikube and deploy it there, because it will fail with RBAC errors :)

@jolson490

jolson490 commented Jun 13, 2018

I believe this issue has been fixed.

Since minikube v0.26.0 the default bootstrapper is kubeadm, which enables RBAC by default.

$ minikube version
minikube version: v0.26.0
$ minikube config view
$ minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   etcd-minikube                           1/1       Running   0          1m
kube-system   kube-addon-manager-minikube             1/1       Running   0          1m
kube-system   kube-apiserver-minikube                 1/1       Running   0          2m
kube-system   kube-controller-manager-minikube        1/1       Running   0          2m
kube-system   kube-dns-86f4d74b45-jnggk               3/3       Running   0          2m
kube-system   kube-proxy-wfmlc                        1/1       Running   0          2m
kube-system   kube-scheduler-minikube                 1/1       Running   0          1m
kube-system   kubernetes-dashboard-5498ccf677-nzdxg   1/1       Running   0          2m
kube-system   storage-provisioner                     1/1       Running   0          2m
$ kubectl get pods -n kube-system kube-apiserver-minikube -o yaml | grep mode
    - --authorization-mode=Node,RBAC
@fejta-bot

fejta-bot commented Sep 11, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@tstromberg tstromberg closed this Sep 19, 2018

@jboyd01 jboyd01 referenced this issue Sep 28, 2018

Merged

Update minikube start docs #2365
