
kubeconfig files cannot login the dashboard #2474

Closed · txg1550759 opened this issue Oct 13, 2017 · 45 comments

txg1550759 commented Oct 13, 2017

Environment
Dashboard version: 1.7.1
Kubernetes version: 1.7.6
Operating system: CentOS 7
Steps to reproduce

The dashboard responds with "Not enough data to create auth info structure."

cat kubeconfig
apiVersion: v1
clusters:

- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR3akNDQXFxZ0F3SUJBZ0lVZDVOb3JqbTRST05jVEk4eDBGMUZKQjgvdDlnd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1p6RUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0ZOb1pXNTZhR1Z1TVJFd0R3WURWUVFIRXdoVAphR1Z1ZW1obGJqRU1NQW9HQTFVRUNoTURhemh6TVE4d0RRWURWUVFMRXdaVGVYTjBaVzB4RXpBUkJnTlZCQU1UCkNtdDFZbVZ5Ym1WMFpYTXdIaGNOTVRjd056SXhNRGN6TURBd1doY05Nakl3TnpJd01EY3pNREF3V2pCbk1Rc3cKQ1FZRFZRUUdFd0pEVGpFUk1BOEdBMVVFQ0JNSVUyaGxibnBvWlc0eEVUQVBCZ05WQkFjVENGTm9aVzU2YUdWdQpNUXd3Q2dZRFZRUUtFd05yT0hNeER6QU5CZ05WQkFzVEJsTjVjM1JsYlRFVE1CRUdBMVVFQXhNS2EzVmlaWEp1ClpYUmxjekNDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFMbHVlMlZPTnp5Y3Zxak4KempmYktBSFV4TUx5M3dGUnNhY0FOeEh2R1JNZXFNM3MxcDFJek1kUkc0c2ZSMG5DT0xxOFBHS2g1UzlCQlh2aApNb0RKc0tQQWZic3QyaHpkYThNYUNKMkVYLzdoTFhicUFLMXZZR1E0bEY0NUF5YWEwcVBsc0xlVEM0Wm1lYnZ4CklkajV3MDRGdnl0cVZoUGw3TmIzcEtVWjJ3a2FGREpIVEszZUlhWkc5QTZGMkNpNTYyOTN0MFpLZDJmZWdWMjEKUEtING5xRllXREk4MU5QWFk3UmNuT29ST0NFeDBQLzh4eHRnT1VIdVVUQ29Lc2tyWUhOWjhzc04vVjM3YVY5bQo5TjQ4UHE3RjBsVFN2a1gxaGIzM0RMK0thT0VTa05UYzRJWVJkbHlBaTNHbmJZSXgwU1gvY2swa0NHWWgxc2ZOCmhpUStPcDBDQXdFQUFhTm1NR1F3RGdZRFZSMFBBUUgvQkFRREFnRUdNQklHQTFVZEV3RUIvd1FJTUFZQkFmOEMKQVFJd0hRWURWUjBPQkJZRUZMcmJid2lTQ0xHNENWa2NRL1VaOXBXZVlsMGZNQjhHQTFVZEl3UVlNQmFBRkxyYgpid2lTQ0xHNENWa2NRL1VaOXBXZVlsMGZNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUJBUUJhWmxaMVRZTE1mTGdPClZCeFhiaWE2TE9FaWQwQ3dZTVJVN2tnMmRTYVVGQjY3THNna3ZNT1NxVTlzR1ZienBwOFlscHJYVk52VXV0ajkKOW9EVExCeDA5aG40SnZXSUIwSXNxNTlQc0NxSEtaZlR3UXNXWHFJUFNkL2w5R0tJRVJxM2ZYZFl1QVpMZ1NldwpIUzdLdkc4Mm9oMG5GRGV1UVowSEpRQ09tWG1BdklwTnZLd3p3TTBtdWl6MklDVkhrVEhIZFZidUJMcTJsYnRMCjdld29sS3VSRmZTQk1oRG4wdUxpTmJYZGY1dGFNVUxvOUdjS3Fnc0hFMEVnbU0xMjJPOHEyVmxLeGZBOTRTSXIKNjFvdndoTHhubVZOYkRXdTE4dDhPYTVUQkFxSDgyczJrQ0dZRXpZNjlKS0Vaam1LZzVsUjBCcEd1U1JVZXJhaApOT3VRSG5ETgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.0.5:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: tuxiaogang
    user: tuxiaogang
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: tuxiaogang
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQyekNDQXNPZ0F3SUJBZ0lVRForNW9kRWlkWVRvM1FnUFlIQWZoR0xWZGpBd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1p6RUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0ZOb1pXNTZhR1Z1TVJFd0R3WURWUVFIRXdoVAphR1Z1ZW1obGJqRU1NQW9HQTFVRUNoTURhemh6TVE4d0RRWURWUVFMRXdaVGVYTjBaVzB4RXpBUkJnTlZCQU1UCkNtdDFZbVZ5Ym1WMFpYTXdIaGNOTVRjeE1ERXpNVE0xTVRBd1doY05NamN4TURFeE1UTTFNVEF3V2pCbk1Rc3cKQ1FZRFZRUUdFd0pEVGpFUk1BOEdBMVVFQ0JNSVUyaGxibnBvWlc0eEVUQVBCZ05WQkFjVENGTm9aVzU2YUdWdQpNUXd3Q2dZRFZRUUtFd05yT0hNeER6QU5CZ05WQkFzVEJsTjVjM1JsYlRFVE1CRUdBMVVFQXhNS2RIVjRhV0Z2CloyRnVaekNDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFMeU9sWk5VL2RNb01ZT1MKWWZ6a1oxYUF2T2Rwd1BxRTgrempNdW9DSUVRQnJiMDVzbGF4ekYvYW9hQzcveE5rQW4ycVZVbEwyQlB2bEJ5MApBVEtwMFh4TlRpdW1sZGJZaFYxZmlMbysxY2VpajU2d3NITGNkNEZUeU56NG11SHFYUTA3NStXRDFRRWpZeFEwCktPRzJyQlg1YmtJMDJMUVIvc2U4SWZIdEdUQ3VFWTJwcndyRUl4UWk4b0FRazNRLzI5SDdpcjB5ZWxPWkxxdjIKcXhlRjc1N1hXZFNMWmN1WmNBV2RNWks0VlA5alJBeG9yVmpubFZkU2drUnBpeTA2Z2dZTUk2OHp0TkppNEw3TQpVQXp3WUZRUDhKZ1BQb3RmdEY1MzEwalNHRnhYejNoZHhLQjNWNGhJZlFxbHpkaGY1SEgydEVXTlVwQ0Y2YUwzCnBFU2V4ZGNDQXdFQUFhTi9NSDB3RGdZRFZSMFBBUUgvQkFRREFnV2dNQjBHQTFVZEpRUVdNQlFHQ0NzR0FRVUYKQndNQkJnZ3JCZ0VGQlFjREFqQU1CZ05WSFJNQkFmOEVBakFBTUIwR0ExVWREZ1FXQkJTcXdsOEo3VGlMVXVEawpjaVhwQ0d6YmIyaUJjREFmQmdOVkhTTUVHREFXZ0JTNjIyOElrZ2l4dUFsWkhFUDFHZmFWbm1KZEh6QU5CZ2txCmhraUc5dzBCQVFzRkFBT0NBUUVBU0NlUStnL0xhV1JEeGZSVzU1T0ZHK3M3SUY3RVZYSlI2RG5ObWVaN2NncjUKaWJWUzZzSElKL1ZOOXBnMlZFWWJoZ3B1ZmdKelM4Mm5ibVhUMUMwYlJ5d1ZRekhxaVNKWkhRdStoL2YvQVMyUApjTUtiY3YzS1dzL3dtekhCZmR3eFdBdTVQektEekJJUDhFNTg3U2ZJU2FZbGtKWm1iN1FYVWN5TEU5bUF3blVCCmVoL2Erd0tFa3ZBNXNhS3Y5NUNyMnRNbmM5MjJJcDB0SUh2RlBhclk4OFdWSXFhdnJWdnY5cjJ4Nm1nOHV0Y0wKZzczc0dzclZYRVFOTXIwcWsxMVd2SzV3dW0rRkdubW4zQUZuaG55ODI1SUN5VGJrV0dOdENlWEt2V0wyZUovdApSNXhuRW5wMjRZTm5iZzBybVB0YVUyeE1wa3pYVCtWcHJBOEE3bVpvV0E9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdkk2VmsxVDkweWd4ZzVKaC9PUm5Wb0M4NTJuQStvVHo3T015NmdJZ1JBR3R2VG15ClZySE1YOXFob0x2L0UyUUNmYXBWU1V2WUUrK1VITFFCTXFuUmZFMU9LNmFWMXRpRlhWK0l1ajdWeDZLUG5yQ3cKY3R4M2dWUEkzUGlhNGVwZERUdm41WVBWQVNOakZEUW80YmFzRmZsdVFqVFl0QkgreDd3aDhlMFpNSzRSamFtdgpDc1FqRkNMeWdCQ1RkRC9iMGZ1S3ZUSjZVNWt1cS9hckY0WHZudGRaMUl0bHk1bHdCWjB4a3JoVS8yTkVER2l0CldPZVZWMUtDUkdtTExUcUNCZ3dqcnpPMDBtTGd2c3hRRFBCZ1ZBL3dtQTgraTErMFhuZlhTTklZWEZmUGVGM0UKb0hkWGlFaDlDcVhOMkYva2NmYTBSWTFTa0lYcG92ZWtSSjdGMXdJREFRQUJBb0lCQVFDeWx6WXl3c2hhekhJQgpWWTk3MFBYVHA4SEVTWlVmY3ZmNlFjTkNnMXIrTHJ6WlFpR1pIWFFld2R4ZWVsR0Jrek1NeFYxY08vcmYvd1pCCkhYa1kvR0ZQSTRWTHNNK3hHNGxOeENPamk4bzkrTW1oRzJjMGszNlpQcnM4R0RmU2pJRXYvTEtLMzQvTE1USXgKdTZtUkI4ejhUekRRZ205U05zMGpieHlUb09kQUE2cyt4QXBWS215LysySjB1bjFUbnM3YmZMVEFXS21HNjR1dwp1cW45a3pBRkRXL1MrNms0clJrcGUvOGhzYklQRW1lRWE4dkJZSG54ZkRqVXI2NjVBQ1ZDNHorQy9RTUF5Um5vCkdFcXJGempURW45Unk0YnNpMWFoSHRvbDRqNW5IZERHVnIvMWJteC85d0dpaVdTc3JYS3FPMnhxMm1sV25sV1MKL1Z5ays1QXhBb0dCQVBtM2dEd2VGR2xlSERDUWlwdXhvMkFEQkVUMUc0YUd6QlozKzlaVVpmdnArNUFoWXJNcAp3dklKUVc0N2VIclVvQUpaOUxUcnpmWXJ5L3lqOWREN3JxeWZXaEI4QThsbVluVVd3eUZvdDN5aG1MOFJ2K2ZQCnB5TzFnbXpWd29FSXZ1QU5VZTQrNmlWTXRac29VVjJwZzFDeHNicEJodWRYb3grNzZjcnRLRGIvQW9HQkFNRk4KSW8yS2lETWQ2d0FESkdvUXYxS2NZRjQwYXUzcGswSTZMbDBrdzVKMUdNNDdCeWk5Syt6VU1EV09tY2ZwQjVtYwo1UWVreDVZaFJyWWZrdFNHQ2V6U2ZMTEJUdXErQXExL2dLMVRqTGF6ZnMwS3dSMFYweXRmV1M3QzUwRm45akVKCmkxOUtaSWxIdEhyRDNEejhPR216eC9Rak84R0toV3kraS9TWHdRa3BBb0dCQUtSbGF2V284OVVlVUw2a0dheEEKT1FjM1ZUTTBqZ2QxYkp5S0p2QkdKZEcvaTQ2cWUvanBVRjdaT3dzZitjUWJnSytybXc4VWdrWkROUXJBd2s3dgpzbUlRa2xGeDQyaE9rQmozZ0VUWlZKcW5KQkQ5MVhIOTRkSC9aN3JReXpqNWtmZWNyVWlFZ005SGZmT0VpblIzCjZXeFJYMmo0UktDK3NEUnZHSTR3clIzdkFvR0FCWVkvMDQyL0FMNzlKVjN4bjNwbERXWmN0clNHemMvY0hvdHQKSWNwWU1JcGFNQ0t0dkxOVFd3eGhhRlp2L0sralFQZWo4QWo4ajBUYU1ZQkxnUGxudFRYNnpGMEw5VmVDMmhTSAp4K3hZWEN4YkZsOFZUOUI4M1lOM0dBZ0g5ZTJUc3FrVUs2QURxWXk4RXJvZ1JEbnRIdEE5aWJPc0ZJYng4ejZxCjMwMnEvYWtDZ1lBNWRkb1p4Y2ljb1RvSmJRMWg0aDhmSSt1UlhUdk1hM3VDNmcxNlQ2cUcvQXgxRk5iSll3SkUKS1UxcmgwUkRkMXYzdWJPRnJ4Uk1SVDNLK3F1YjZqbEdpMkcvR1Q0Wm02b3o1TUVMajFGb2g4cGplSjNqdkFkTQpnRm5kdTMxaFdlbkJOWFBEQ1NZTDFTVjl1NlFqVlJtSlRqeHEzSGdEVlVyYUhjNWpjbkFqVkE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=

[root@master1 ~]# cat /lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --advertise-address=192.168.0.5 \
  --bind-address=192.168.0.5 \
  --insecure-bind-address=192.168.0.5 \
  --kubelet-https=true \
  --runtime-config=rbac.authorization.k8s.io/v1beta1 \
  --authorization-mode=RBAC \
  --experimental-bootstrap-token-auth \
  --token-auth-file=/etc/kubernetes/ssl/token.csv \
  --service-cluster-ip-range=172.17.0.0/16 \
  --service-node-port-range=300-9000 \
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \
  --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
  --etcd-servers=https://192.168.0.8:2379,https://192.168.0.9:2379,https://192.168.0.10:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/lib/audit.log \
  --event-ttl=1h \
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

txg1550759 changed the title from "kubeconfig files not login the dashboard" to "kubeconfig files cannot login the dashboard" on Oct 13, 2017
floreks (Member) commented Oct 13, 2017

Works as intended. Read our Access Control guide on wiki pages to find out how it works.

kachkaev commented Dec 3, 2017

@txg1550759 were you able to find out what the problem was? I also have an admin.conf that was generated for me by kubeadm, but it does not work. I can do kubectl get pods etc. with it, though.

floreks (Member) commented Dec 3, 2017

https://github.com/kubernetes/dashboard/wiki/Access-control#kubeconfig

This method of logging in is provided for convenience. Only authentication options specified by --authentication-mode flag are supported in kubeconfig file. In case it is configured to use any other way, error will be shown in Dashboard. External identity providers or certificate-based authentication are not supported at this time.
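(For reference, a hedged sketch of where that flag lives; the exact args vary by Dashboard version:)

# the mode is set on the dashboard container itself, e.g. in its Deployment args:
#   args:
#     - --authentication-mode=token   # default; add "basic" to also allow username/password login
kubectl -n kube-system edit deployment kubernetes-dashboard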

kachkaev commented Dec 3, 2017

A few thoughts for those who might end up here from search. The reason my ~/.kube/config yaml file did not work in dashboard 1.8 is that it did not contain a token or a username with a password. Searching for Not enough data to create auth info structure in the dashboard's source code clearly shows that this is what is expected in the file you upload. The same was true in @txg1550759's case.

The yaml file I was trying to authenticate with came from /etc/kubernetes/admin.conf, which was generated by kubeadm 1.7 back in July. I saw other admin files since then that were generated by kops – these did contain a password if I remember correctly. So perhaps the lack of a token or a password in kubeconfig is some kind of a legacy thing or a kubeadm-specific thing, not sure.

I ran kubectl get clusterRoles and kubectl get clusterRoleBindings and saw an item called cluster-admin in both. However, unlike other role bindings (e.g. tiller-cluster-rule), the cluster-admin one referred to something called apiGroup instead of a ServiceAccount (to which a token can belong). Check out the difference at the bottom of each output:

kubectl edit ClusterRoleBinding tiller-cluster-rule

   ↓

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: 2017-07-23T16:34:40Z
  name: tiller-cluster-rule
  resourceVersion: "2328"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/tiller-cluster-rule
  uid: d3249b5d-6fc4-1227-8920-5250000643887
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:                              # ← here
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
kubectl edit ClusterRoleBinding cluster-admin

   ↓

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: 2017-07-23T16:08:15Z
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
  resourceVersion: "118"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin
  uid: 2224ba97-6fc4-1227-8920-5250000643887
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:                              # ← here
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters

This suggests that my cluster probably does not have a dedicated ‘root’ service account per se. That's why ~/.kube/config works for kubectl without having a token or a username and password in it, but does not work for the dashboard.

Nevertheless, I could get into the dashboard by authenticating myself as other ServiceAccounts and this worked well. Depending on the privileges of a service account I picked, the dashboard was giving me access to different resources, which is great! Here's an example of getting a token for the service account called tiller to authenticate (you'll have it if you use helm):

kubectl describe serviceaccount tiller -n kube-system

   ↓

Name:         tiller
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Image pull secrets:  <none>

Mountable secrets:   tiller-token-854dx

Tokens:              tiller-token-854dx
kubectl describe secret tiller-token-854dx -n kube-system

   ↓

...
Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token: ×××

Copy token ××× and paste it into the dashboard's login screen.

The useful thing about the tiller service account in my case is that it's bound to the cluster-admin cluster role (see the yaml above). This is because tiller needs to be able to launch pods, set up ingress rules, edit secrets, etc. Such a role binding is not present in every cluster, but it may be a default thing in simple setups. If that's the case, using tiller's token in the dashboard makes you the ‘root’ user, because it implies that you have the cluster-admin cluster role.

Finally, my upgrade from dashboard 1.6 to 1.8 can be considered as finished! 😄


All this RBAC stuff is way too advanced for me to be honest, so it can be that I‘ve done something wrong. I guess that a proper solution would be to create a new service account and a new role binding from scratch and then use that token in the dashboard instead of the tiller's one. However I'll probably stay with my tiller's token for some time until I get energy for switching to a proper solution. Could anyone please confirm or correct my thoughts?
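(For anyone who wants that "proper solution" spelled out, here is a minimal sketch; the account name dashboard-admin is just a placeholder, not something from this thread:)

kubectl -n kube-system create serviceaccount dashboard-admin
kubectl create clusterrolebinding dashboard-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:dashboard-admin
# print the token of the automatically created secret and paste it into the login screen
kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret | awk '/dashboard-admin-token/ {print $1}')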

maciaszczykm (Member) commented:

I guess that a proper solution would be to create a new service account and a new role binding from scratch and then use that token in the dashboard instead of the tiller's one.

Correct. It is the recommended way of handling it.

However I'll probably stay with my tiller's token for some time until I get energy for switching to a proper solution. Could anyone please confirm or correct my thoughts?

If it works for you, it is fine. You should be aware, though, that anyone with the cluster-admin role is able to perform critical changes within the cluster. That's why it should be accessible only to a small group of people.

floreks (Member) commented Dec 4, 2017

@kachkaev I'm really glad that you actually took the time to try and find a solution on your own :) I can help you fill in the gaps.

A few thoughts for those who might end up here from search. The reason my ~/.kube/config yaml file did not work in dashboard 1.8 is that it did not contain a token or a username with a password. Searching for Not enough data to create auth info structure in the dashboard's source code clearly shows that this is what is expected in the file you upload. The same was true in @txg1550759's case.

The yaml file I was trying to authenticate with came from /etc/kubernetes/admin.conf, which was generated by kubeadm 1.7 back in July. I saw other admin files since then that were generated by kops – these did contain a password if I remember correctly. So perhaps the lack of a token or a password in kubeconfig is some kind of a legacy thing or a kubeadm-specific thing, not sure.

Usually cluster provisioners like kubeadm or kops configure the kubeconfig file to use certificate-based authentication. This is fine for a binary such as kubectl, because it can establish a secure connection with the API server and be authenticated based on your certificates. This approach, however, will not work for a web application: your private key should never leave your computer, and a web app cannot establish the kind of connection that kubectl can.

That is why we have to rely on the other methods of authenticating a user offered by Kubernetes, such as token-based authentication or basic auth (login and password). The second one only works when the ABAC authorization mode is enabled and some additional arguments are passed to the apiserver. Neither of these methods is deprecated. They are simply different ways to authenticate a user, and all of them can be specified in a kubeconfig file.
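(A hedged sketch of what that basic-auth setup looked like with ~1.8-era flags; the paths and credentials below are placeholders and should be checked against your Kubernetes version:)

# 1. Create a static password file for the apiserver, one "password,username,uid" entry per line:
echo 'changeme,admin,1000' > /etc/kubernetes/basic_auth.csv
# 2. Add to the kube-apiserver flags:
#      --basic-auth-file=/etc/kubernetes/basic_auth.csv
#      --authorization-mode=ABAC,RBAC --authorization-policy-file=/etc/kubernetes/abac-policy.jsonl
# 3. Add to the Dashboard container args:
#      --authentication-mode=basic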

There are even more authentication options. That is why it is highly recommended to read our documentation before using Dashboard. In the Introduction section of our Access control guide, we provide links that should clear up any doubts about how Dashboard works, especially the link to Authenticating in Kubernetes.

https://github.com/kubernetes/dashboard/wiki/Access-control#introduction

All this RBAC stuff is way too advanced for me to be honest, so it can be that I‘ve done something wrong. I guess that a proper solution would be to create a new service account and a new role binding from scratch and then use that token in the dashboard instead of the tiller's one. However I'll probably stay with my tiller's token for some time until I get energy for switching to a proper solution. Could anyone please confirm or correct my thoughts?

RBAC is generally quite a big topic in Kubernetes. I'd recommend reading the Using RBAC Authorization guide to find out how to create and configure a "user" with the required permissions. We are really trying to keep our documentation clear and to provide all the necessary links, so users can find out how everything works and how to work not only with Dashboard but also with Kubernetes.

In case you have some more doubts or questions you can ask me. I'll try to help if I can.

kachkaev commented Dec 4, 2017

Thank you for the replies @maciaszczykm and @floreks! RBAC is getting slightly clearer over time, thanks to the docs that are constantly improving. I really like the fact that, if installed correctly, the dashboard no longer has admin privileges, so it is possible to give different team members varying permissions. Totally agree that if everyone has the cluster-admin role, things can go wrong pretty quickly!

When I ran dashboard 1.6 at https://dashboard.example.com/, I was adding basic auth to the ingress rule to protect the dashboard from strangers – anyone could become an admin of my cluster otherwise. After upgrading to 1.8 with your official yaml, it seems that running https://dashboard.example.com/ is now safe even without any basic auth in the ingress. If a hacker gets to that domain, they'll only be able to learn about the existence of the k8s cluster, but not perform any read/write operations on it. Only authenticated token bearers will be able to get the details about the existing resources and change them (as long as the token represents a ServiceAccount with enough privileges). Am I right about this?

I understand that the best option is to keep https://dashboard.example.com/ available only behind a firewall, but am still curious if exposing this resource publicly is OK for simple clusters with non-critical personal projects. A friend of mine has got an opposite opinion, we need to stop our dispute once and for all 😄

floreks (Member) commented Dec 4, 2017

Only authenticated token bearers will be able to get the details about the existing resources and change them (as long as the token represents a ServiceAccount with enough privileges). Am I right about this?

A token does not necessarily have to be tied to a Service Account. It can be issued by OIDC or any other external identity provider. As long as the API server accepts it, it is fine. There are more ways of configuring and getting a "correct" token to log in.

I understand that the best option is to keep https://dashboard.example.com/ available only behind a firewall, but am still curious if exposing this resource publicly is OK for simple clusters with non-critical personal projects. A friend of mine has got an opposite opinion, we need to stop our dispute once and for all.

By default, Dashboard now has very few permissions and does not impose any security risks anymore (https://github.com/kubernetes/dashboard/wiki/Access-control#v18). I do not see any security threats in exposing Dashboard publicly if everything is configured properly. There is no way to escalate privileges right now, so exposing Dashboard should be no different than exposing the API server (secured, with RBAC enabled).

maciaszczykm (Member) commented:

By default, Dashboard now has very few permissions and does not impose any security risks anymore (https://github.com/kubernetes/dashboard/wiki/Access-control#v18). I do not see any security threats in exposing Dashboard publicly if everything is configured properly. There is no way to escalate privileges right now, so exposing Dashboard should be no different than exposing the API server (secured, with RBAC enabled).

Settings should be restricted, like a few other views/actions. We are currently working on this.

huangjiasingle commented Jan 14, 2018

@floreks it should support the kubeconfig created by kubeadm; update the code of this struct and method:

type userInfo struct {
	Token    string `yaml:"token"`
	Username string `yaml:"username"`
	Password string `yaml:"password"`
}


func (self *kubeConfigAuthenticator) getAuthInfo(info userInfo) (api.AuthInfo, error) {
	if len(info.Token) == 0 && (len(info.Password) == 0 || len(info.Username) == 0) {
		return api.AuthInfo{}, errors.New("Not enough data to create auth info structure.")
	}

	result := api.AuthInfo{}
	if self.authModes.IsEnabled(authApi.Token) {
		result.Token = info.Token
	}

	if self.authModes.IsEnabled(authApi.Basic) {
		result.Username = info.Username
		result.Password = info.Password
	}

	return result, nil
}

floreks (Member) commented Jan 14, 2018

It was already explained many times why certificate-based authentication is not supported. It will not be added for security reasons, as the private key should never leave the user's computer.

cdenneen commented:

@floreks Thanks... this is a much better way of authenticating, but I think the reason people have been ending up here has to do with the wiki docs on dashboard authentication. They go through the different methods of authentication, but should probably at least provide a basic "Usage" section along the lines of: if you are new to Kubernetes and have just installed the dashboard, do the following steps: 1. create a service account, 2. get its token, 3. put the token in this section of your ~/.kube/config yaml.

cdenneen commented:

Maybe it also makes sense to provide a sample dashboard proxy setup, since /ui no longer works?

floreks (Member) commented Jan 18, 2018

@cdenneen creating a sample user and getting a token are also described in the docs.

https://github.com/kubernetes/dashboard/wiki/Creating-sample-user

The wiki describes a few ways of accessing Dashboard (kubectl proxy, NodePort, directly through the API server). We will not document an ingress method, as it is very custom and the user has to decide what tools to use and how to configure them to expose an application.

djsd123 commented Jan 29, 2018

FYI, I have both admin certs and OIDC auth (ID and refresh token) configured in my kubeconfig and still get Not enough data to create auth info structure. Rather than a file, a kubectl context may be the answer? Will try to look into it if I ever get the time, but for now, pasting an ID token appears to be the way to go.

floreks (Member) commented Jan 29, 2018

The current context of the kubeconfig file has to be set to a context that uses a token.
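(A hedged sketch of such a kubeconfig; the server address, CA data and token values are placeholders:)

cat > dashboard.kubeconfig <<EOF
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64-encoded CA>
    server: https://192.168.0.5:6443
  name: kubernetes
users:
- name: dashboard-user
  user:
    token: <service-account-token>
contexts:
- context:
    cluster: kubernetes
    user: dashboard-user
  name: dashboard
current-context: dashboard
EOF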

cristipp commented Feb 7, 2018

@floreks Thanks for the pointers. We've used https://github.com/kubernetes/dashboard/wiki/Access-control#getting-token-with-kubectl.

I would like to kindly point out that the workflow is not quite obvious. Repurposing a token randomly picked from a list of service accounts seems rather arbitrary. Would there be a chance to package this as a feature of kubectl? Something like kubectl config get-bearer-token.

abrahamrhoffman commented Feb 14, 2018

Not really sure why no one is posting this in the docs... Use the 'clusterrole-aggregation-controller' token to access your dashboard as 'root':

kubectl -n kube-system describe secrets \
   `kubectl -n kube-system get secrets | awk '/clusterrole-aggregation-controller/ {print $1}'` \
       | awk '/token:/ {print $2}'

Just kind of silly to not include a root account as a part of the dashboard deployment.


floreks (Member) commented Feb 14, 2018

  1. The existence of this role is not 100% certain. It depends on the way the cluster is provisioned.
  2. Using SA tokens to log in is only one way to log in. Any token acceptable by the apiserver will work. This includes configuring the apiserver with some OIDC provider and using a token accepted by that provider to log in.
  3. It is the user's decision what privileges to grant to the "user" used to access Dashboard.

Bottom line: mentioning an arbitrarily chosen SA that might, but does not necessarily, exist in the cluster is never a good idea. That is why we have only given an example of how to log in using ANY existing SA.

abrahamrhoffman commented Feb 14, 2018

I get that there are a lot of different ways to connect, it's just unintuitive. I think the quickstart docs should call out at least one example of how to connect with a 'root' account. Taking hours of searching to find one login method makes people want to quit.

linux17kartik commented Feb 19, 2018

Hi everyone, thanks for the discussion. I spent four days on this, and then I found a solution.
Issue:
kubectl access error

Solution 👍

  • Open the file: vim ~/.kube/config
  • Search for access-token:
    Copy the token and paste it into the kube dashboard console.
    It worked for me.

You can contact me in person: linux.kartik@gmail.com

floreks (Member) commented Feb 19, 2018

@abrahamrhoffman You mean being able to access Dashboard as root, as described here? https://github.com/kubernetes/dashboard/wiki/Access-control#admin-privileges

christoph-daehne commented Feb 19, 2018

It was already explained many times why certificate-based authentication is not supported. It will not be added for security reasons, as the private key should never leave the user's computer.
@floreks

Do you consider it safe to use the Dashboard UI with certificate authentication when it runs on the user's machine? I used to run it like that and just assumed the certificate would stay on my machine.

EDIT: No, it didn't. :)
EDIT2: Meaning: the certificate did not leave the machine; apparently it was not used at all.

skurfuerst commented:

It would be cool if, when using kubectl proxy, the API server would pass on the information about the logged-in user, and the Kubernetes dashboard could use it :-)

All the best,
Sebastian

floreks (Member) commented Feb 19, 2018

It would be cool if, when using kubectl proxy, the API server would pass on the information about the logged-in user, and the Kubernetes dashboard could use it :-)

Yes, it would be great, and we are still waiting for such an API. Unfortunately, I don't have any information from the core team about when this will be added.

Do you consider it safe to use the Dashboard UI with certificate–authentication when it runs on the user's machine? I used to run it like that and just assumed to certificate to stay on my machine.

I'm not sure I understand the use case. Dashboard does not support certificate-based authentication in any way. The only supported ways are token-based auth and basic auth (username & password).

abrahamrhoffman commented:

@floreks - that's great documentation!

A lot of individuals' workflows look like:

  • Google 'kubernetes dashboard'
  • Hit the Kubernetes Dashboard GitHub README.md
  • Click 'installation' under Documentation
  • Installation with kubectl create -f ...
  • Wonder how to connect to the dashboard

Could you please include that information and a link on that page? https://github.com/kubernetes/dashboard/wiki/Installation

floreks (Member) commented Feb 20, 2018

@abrahamrhoffman

Accessing Dashboard is a different topic and is covered in the right-side menu under the Accessing Dashboard section.

A direct link is also available right next to the Installation link under the Documentation section in our README.md.


submarin76 commented Mar 16, 2018

I am trying to access the dashboard using a kubeconfig file.
Do we have a sample kubeconfig file somewhere in the documentation? I looked and didn't find any.
It doesn't accept the kubeconfig file generated by kubeadm.

floreks (Member) commented Mar 16, 2018

https://github.com/kubernetes/dashboard/wiki/Access-control#kubeconfig

submarin76 commented Mar 19, 2018

Please correct me if I am wrong:
if --authentication-mode=basic is not provided:

  • username and password are indicative only, not functional.
  • what matters most is the structure and the token.

I figured out that a sample kubeconfig file looks like the following:


apiVersion: v1
kind: Config
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
preferences: {}
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    username: test123
    password: test123
    token: ...

The choice between token and username/password depends on
--authentication-mode=

floreks (Member) commented Mar 19, 2018

username and password are indicative only, not functional

https://github.com/kubernetes/dashboard/wiki/Access-control#basic

what matters most is the structure and the token.
I figured out that a sample kubeconfig file looks like the following:

https://github.com/kubernetes/dashboard/wiki/Access-control#kubeconfig

The structure of the kubeconfig file can be seen in the official documentation: https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/

I think it is not recommended to configure multiple auth options for a single user...

jirkadanek commented Mar 23, 2018

From reading the abovementioned documentation

kubectl create serviceaccount jdanek
kubectl get serviceaccount jdanek -o yaml
kubectl describe secret jdanek-token-zbt5x
kubectl create rolebinding jdanek-admin --clusterrole=cluster-admin --serviceaccount=default:jdanek

I am the one who has the power over the cluster now ))

edit: no, not really, I needed a clusterrolebinding there
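(Presumably the corrected last step, binding cluster-wide instead of within a single namespace:)

kubectl create clusterrolebinding jdanek-admin --clusterrole=cluster-admin --serviceaccount=default:jdanek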

stealthybox (Member) commented:

Can we print a clearer error message about certificate auth not being supported?

"Not enough data to create auth info structure" implies to the user that the kube config is deficient.

Additionally, it might be useful to state upfront that certificate auth is not supported before the user selects a kubeconfig.

kcd83 commented May 1, 2018

Yeah this all makes sense, could we do this?

	// (pseudocode: assumes userInfo is extended so that client-key-data is also parsed from the kubeconfig)
	if len(info.ClientKeyData) > 0 {
		return api.AuthInfo{}, errors.New("Client certificate is not a valid authentication method for the dashboard")
	}
	return api.AuthInfo{}, errors.New("Not enough data to create auth info structure.")

cheld (Contributor) commented May 1, 2018

It seems to be a small change. Can you create a PR for it? I will merge

VampireDaniel commented:

I don't see the correct answer on the internet or in the official documentation, so here is my research. Maybe this can help you.

First, you need to have an account (ideally use a built-in one; kubectl get clusterroles lists the built-in cluster roles; I highly recommend using edit rather than admin).

Then, type this command to get the token which will be used in the kubeconfig:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep edit | awk '{print $1}')

Finally, the structure for the kubeconfig should be like this

current-context: [account-name]
contexts:
- context:
    user: [account-name]
  name: [account-name]
users:
- name: [account-name]
  user:
    token: [token value which you get at step2]

This works for me; my dashboard version is kubernetes-dashboard-amd64:v1.10.0.

questionmorc commented:

@VampireDaniel This is probably the answer everyone is looking for. The documentation doesn't explain how to add the token to the kube config.

philippludwig commented:

@VampireDaniel That command prints tons of tokens, but which is the correct one?

mydockergit commented Nov 12, 2018

I used https://github.com/kubernetes/dashboard/wiki/Creating-sample-user but edited the ClusterRoleBinding to:

kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
EOF

This gave kubernetes-dashboard the admin permissions.
Then I took its token with:

kubectl -n kube-system describe secrets \
   `kubectl -n kube-system get secrets | awk '/kubernetes-dashboard/ {print $1}'` \
       | awk '/token:/ {print $2}'

And used it. It looked like it got stuck on the login; after 20 seconds I pressed "skip" and then I had the permissions.

To reach the dashboard I used:

kubectl proxy --address=0.0.0.0 --accept-hosts='.*'
Starting to serve on [::]:8001

This is really not recommended, because everyone can access it.
And then I used:
http://<master_ip>:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
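(A less exposed variant, for what it's worth: keep kubectl proxy on its default loopback bind and browse from the same machine, or tunnel port 8001 over SSH instead of opening it to everyone:)

kubectl proxy --port=8001
# then open:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login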

jayunit100 (Member) commented Jan 15, 2020

This is really not recommended, because everyone can access it.
And then I used:
http://<master_ip>:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login

So, wait - are you running kubectl proxy on the apiserver in the above comment, as opposed to running it from a locally logged-in machine? This option won't work for people who don't have, for example, SSH access to their masters. Is the master_ip necessary for this to work? If so, how is it any different from a NodePort-based solution?

Thanks for providing this workaround btw... just clarifying !

junsionzhang commented Feb 7, 2020

From reading the abovementioned documentation

kubectl create serviceaccount jdanek
kubectl get serviceaccount jdanek -o yaml
kubectl describe secret jdanek-token-zbt5x
kubectl create rolebinding jdanek-admin --clusterrole=cluster-admin --serviceaccount=default:jdanek

I am the one who has the power over the cluster now ))

edit: no, not really, I needed a clusterrolebinding there

What is the namespace for your dashboard? Mine is kube-system. I did as follows but with no luck:

kubectl create serviceaccount junsion -n kube-system
kubectl get serviceaccounts -n kube-system junsion -o yaml
kubectl get secrets junsion-token-mzpfr -n kube-system
kubectl create clusterrolebinding junsion-binding --clusterrole=cluster-admin --serviceaccount=kube-system:junsion

The cluster is deployed by kubespray, and when using the token to log in, the error is as follows:
url: https://172.16.250.11:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
error: unknown server error 404

kachkaev commented Sep 23, 2020

@abrahamrhoffman’s solution no longer worked for me in k8s 1.19.2, probably because clusterrole-aggregation-controller is no longer an admin (I’m not an expert in k8s, so this is just a speculation).

What helped was the creation of a special service account with admin privileges and then printing its token:

kubectl --namespace=kube-system create serviceaccount cluster-admin-dashboard-sa

kubectl --namespace=kube-system create clusterrolebinding cluster-admin-dashboard-sa \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:cluster-admin-dashboard-sa

kubectl --namespace=kube-system describe secrets \
  $(kubectl --namespace=kube-system get secrets | awk '/cluster-admin-dashboard-sa-token/ {print $1}') \
  | awk '/token:/ {print $2}'

UPD: Turns out that the Dashboard docs also include a recipe for creating the service account:
https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md

candlerb commented:

Something like kubectl config get-bearer-token.

That sounds like a good idea to me; the issued tokens could have a relatively small time limit.

Otherwise, I think the "right" way of doing this would be with OpenID. It would be nice if the dashboard itself would redirect to your openid provider, where you could authenticate, and redirect back; but AFAIK you currently need something like kubelogin.
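(For context, a hedged sketch of the apiserver side of that OpenID setup; the values below are placeholders, and Dashboard itself only consumes the resulting ID token, e.g. one obtained via kubelogin:)

# kube-apiserver flags for OIDC token authentication:
#   --oidc-issuer-url=https://accounts.example.com
#   --oidc-client-id=kubernetes
#   --oidc-username-claim=email
#   --oidc-groups-claim=groups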

imranrazakhan commented:

@candlerb

I think the "right" way of doing this would be with OpenID. It would be nice if the dashboard itself would redirect to your openid provider, where you could authenticate, and redirect back;

I agree, that will make things much easier.
