This repository has been archived by the owner. It is now read-only.

User "system:serviceaccount:default:default" cannot get at the cluster scope. #6840

Open
dotw opened this issue Apr 25, 2017 · 6 comments

@dotw

commented Apr 25, 2017

I got the above error in my environment. I'm using kubeadm with Kubernetes 1.6.x. Any ideas?

@dotw (Author)

commented Apr 25, 2017

I got the error when accessing http://xxx.xxx.xxx/k8s/oapi/v1

@flytreeleft


commented Jun 10, 2017

You should bind the service account system:serviceaccount:default:default (the default account bound to pods in the default namespace) to the cluster-admin role. Create a YAML file (named e.g. fabric8-rbac.yaml) with the following contents:

# NOTE: The service account `default:default` already exists in k8s cluster.
# You can create a new account like this:
#---
#apiVersion: v1
#kind: ServiceAccount
#metadata:
#  name: <new-account-name>
#  namespace: <namespace>

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: fabric8-rbac
subjects:
  - kind: ServiceAccount
    # Must match the ServiceAccount's `metadata.name`
    name: default
    # Must match the ServiceAccount's `metadata.namespace`
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

Then, apply it by running kubectl apply -f fabric8-rbac.yaml.

If you want to unbind it, just run kubectl delete -f fabric8-rbac.yaml.
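To confirm the binding took effect, one option (assuming kubectl 1.6+, where `kubectl auth can-i` and user impersonation via `--as` are available) is to check the service account's permissions directly:

```shell
# Impersonate the service account's RBAC username and ask whether it
# can now read pods; this should answer "yes" once the
# ClusterRoleBinding above is in place.
kubectl auth can-i get pods \
  --as=system:serviceaccount:default:default
```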

@sellers


commented Aug 15, 2017

This seems to make every pod an "admin" pod and gives it access to perform admin-like activities. Am I misreading this?

@flytreeleft


commented Aug 17, 2017

@sellers Yes, the pods in the default namespace will have admin permissions. But Fabric8 needs elevated permissions to manage pods, so the easiest way to get Fabric8 working is to bind the cluster role cluster-admin to the default service account.

Normally, I prefer to deploy Fabric8 to a separate namespace, e.g. devops-platform (just run the deploy command with --namespace=devops-platform), and bind cluster-admin to system:serviceaccount:devops-platform:default.
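The subject names in these bindings follow Kubernetes' fixed naming scheme for service accounts: every ServiceAccount is presented to the authorizer as the user `system:serviceaccount:<namespace>:<name>`. A tiny illustrative sketch (plain Python, not part of any Kubernetes API) of how the username in the error message decomposes:

```python
def service_account_username(namespace: str, name: str) -> str:
    """Kubernetes presents a ServiceAccount to RBAC as the user
    "system:serviceaccount:<namespace>:<name>"."""
    return f"system:serviceaccount:{namespace}:{name}"

# The username from the error in this issue: the `default` account
# in the `default` namespace.
print(service_account_username("default", "default"))
# system:serviceaccount:default:default

# The subject bound when Fabric8 is deployed to another namespace:
print(service_account_username("devops-platform", "default"))
# system:serviceaccount:devops-platform:default
```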

If we knew exactly which permissions Fabric8 needs, we could create a Role granting only those permissions and define a RoleBinding to bind the default service account to that role:

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: fabric8-admin
  namespace: devops-platform
rules:
# Just an example, feel free to change it
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: fabric8-rbac
  namespace: devops-platform
subjects:
  - kind: ServiceAccount
    name: default
    namespace: devops-platform
roleRef:
  kind: Role
  name: fabric8-admin
  apiGroup: rbac.authorization.k8s.io

For more details, please read https://kubernetes.io/docs/admin/authorization/rbac/ :-)

@VFT


commented Aug 29, 2017

Actually, Fabric8 installed by Helm has a bug on Kubernetes 1.6+ because of RBAC mode.

Such as:

  • exposecontroller
  • configmapcontroller
  • fabric8

Looking at the deployment configurations:

exposecontroller:

serviceAccount: exposecontroller
serviceAccountName: exposecontroller

configmapcontroller:

serviceAccount: configmapcontroller
serviceAccountName: configmapcontroller

And fabric8:

serviceAccount: default
serviceAccountName: default

The ServiceAccount fabric8 was created but is unused.

So, you can edit the fabric8 deployment to use serviceAccount: fabric8 and bind all three service accounts to cluster-admin:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: exposecontroller-view-sources
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: exposecontroller
  # set to the namespace Fabric8 was deployed into
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: configmapcontroller-view-sources
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: configmapcontroller
  # set to the namespace Fabric8 was deployed into
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: fabric8-view-sources
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: fabric8
  # set to the namespace Fabric8 was deployed into
  namespace: default

Pay attention to the --namespace flag when executing kubectl create -f.
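One detail worth noting (assuming the manifests above are saved as fabric8-rbac.yaml): a ClusterRoleBinding is cluster-scoped, so the --namespace flag does not change where the binding itself lives. What matters is the `namespace` field on each ServiceAccount subject, which determines in which namespace the account is looked up:

```shell
# ClusterRoleBindings are cluster-scoped; the service accounts they
# reference are resolved via each subject's `namespace` field in the
# manifest, not via the kubectl --namespace flag.
kubectl create -f fabric8-rbac.yaml
```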

@Fettah


commented Sep 18, 2018

Can you try this:

kubectl create clusterrolebinding default-admin --clusterrole cluster-admin --serviceaccount=default:default
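For reference, a declarative equivalent of that one-liner (a sketch; the binding name matches the command above, and rbac.authorization.k8s.io/v1 is available on clusters current as of this comment):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```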
