dashboard pod cannot run on kubeadm #1578

Closed
Masber opened this issue Jan 19, 2017 · 44 comments

@Masber

Masber commented Jan 19, 2017

Issue details

I can't get the dashboard to run. I am using a fresh kubeadm installation + Calico.

Kubernetes version: 1.5.1
Operating system: Centos7
Steps to reproduce

kubeadm init
kubeadm join ... --> join a new node
kubectl apply -f http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/kubeadm/calico.yaml
kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

Observed result
[root@kub1 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY     STATUS             RESTARTS   AGE
kube-system   calico-etcd-xnb3q                          1/1       Running            0          44m
kube-system   calico-node-7ccs2                          2/2       Running            0          44m
kube-system   calico-node-zgww7                          2/2       Running            0          44m
kube-system   calico-policy-controller-807063459-r5k6f   1/1       Running            0          44m
kube-system   dummy-2088944543-xx6hb                     1/1       Running            0          52m
kube-system   etcd-kub1.localhost                        1/1       Running            0          51m
kube-system   kube-apiserver-kub1.localhost              1/1       Running            0          52m
kube-system   kube-controller-manager-kub1.localhost     1/1       Running            0          52m
kube-system   kube-discovery-1769846148-2znmc            1/1       Running            0          52m
kube-system   kube-dns-2924299975-mjcll                  4/4       Running            0          52m
kube-system   kube-proxy-393q6                           1/1       Running            0          52m
kube-system   kube-proxy-lhzpw                           1/1       Running            0          52m
kube-system   kube-scheduler-kub1.localhost              1/1       Running            0          52m
kube-system   kubernetes-dashboard-3203831700-sz5kr      0/1       CrashLoopBackOff   11         39m
[root@kub1 ~]#
[root@kub1 ~]#
[root@kub1 ~]#
[root@kub1 ~]# kubectl describe pod kubernetes-dashboard-3203831700-sz5kr -n kube-system
Name:           kubernetes-dashboard-3203831700-sz5kr
Namespace:      kube-system
Node:           kub2.localhost/192.168.20.11
Start Time:     Fri, 20 Jan 2017 02:00:57 +1100
Labels:         app=kubernetes-dashboard
                pod-template-hash=3203831700
Status:         Running
IP:             192.168.99.129
Controllers:    ReplicaSet/kubernetes-dashboard-3203831700
Containers:
  kubernetes-dashboard:
    Container ID:       docker://61448d97cbbcea7900def2f9252b186ba09b3bfde5fcc761fd5a69d30ef9e63e
    Image:              gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1
    Image ID:           docker-pullable://gcr.io/google_containers/kubernetes-dashboard-amd64@sha256:46a09eb9c611e625e7de3fcf325cf78e629d002e57dc80348e9b0638338206b5
    Port:               9090/TCP
    State:              Waiting
      Reason:           CrashLoopBackOff
    Last State:         Terminated
      Reason:           Error
      Exit Code:        1
      Started:          Fri, 20 Jan 2017 02:39:20 +1100
      Finished:         Fri, 20 Jan 2017 02:39:50 +1100
    Ready:              False
    Restart Count:      11
    Liveness:           http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-g8c6f (ro)
    Environment Variables:      <none>
Conditions:
  Type          Status
  Initialized   True
  Ready         False
  PodScheduled  True
Volumes:
  default-token-g8c6f:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-g8c6f
QoS Class:      BestEffort
Tolerations:    dedicated=master:Equal:NoSchedule
Events:
  FirstSeen     LastSeen        Count   From                            SubObjectPath                           Type            Reason          Message
  ---------     --------        -----   ----                            -------------                           --------        ------          -------
  40m           40m             1       {default-scheduler }                                                    Normal          Scheduled       Successfully assigned kubernetes-dashboard-3203831700-sz5kr to kub2.localhost
  40m           40m             1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal          Created         Created container with docker id a49cd03e9777; Security:[seccomp=unconfined]
  40m           40m             1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal          Started         Started container with docker id a49cd03e9777
  39m           39m             1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal          Created         Created container with docker id ce3d37ca7822; Security:[seccomp=unconfined]
  39m           39m             1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal          Started         Started container with docker id ce3d37ca7822
  39m           39m             2       {kubelet kub2.localhost}                                                Warning         FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-3203831700-sz5kr_kube-system(15a70355-de58-11e6-b7ee-0050568da433)"

  38m   38m     1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Started         Started container with docker id cd022645360a
  38m   38m     1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Created         Created container with docker id cd022645360a; Security:[seccomp=unconfined]
  37m   37m     3       {kubelet kub2.localhost}                                                Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-3203831700-sz5kr_kube-system(15a70355-de58-11e6-b7ee-0050568da433)"

  37m   37m     1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Created         Created container with docker id 62be00de3036; Security:[seccomp=unconfined]
  37m   37m     1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Started         Started container with docker id 62be00de3036
  36m   36m     4       {kubelet kub2.localhost}                                                Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-3203831700-sz5kr_kube-system(15a70355-de58-11e6-b7ee-0050568da433)"

  35m   35m     1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Created         Created container with docker id 8375b55999c9; Security:[seccomp=unconfined]
  35m   35m     1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Started         Started container with docker id 8375b55999c9
  35m   33m     7       {kubelet kub2.localhost}                                                Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-3203831700-sz5kr_kube-system(15a70355-de58-11e6-b7ee-0050568da433)"

  33m   33m     1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Created         Created container with docker id abf92039a988; Security:[seccomp=unconfined]
  33m   33m     1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Started         Started container with docker id abf92039a988
  33m   30m     14      {kubelet kub2.localhost}                                                Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-3203831700-sz5kr_kube-system(15a70355-de58-11e6-b7ee-0050568da433)"

  30m   30m     1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Created         Created container with docker id 019b1fa3d8f1; Security:[seccomp=unconfined]
  30m   30m     1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Started         Started container with docker id 019b1fa3d8f1
  24m   24m     1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Created         Created container with docker id d787df99e676; Security:[seccomp=unconfined]
  24m   24m     1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Started         Started container with docker id d787df99e676
  19m   19m     1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Started         Started container with docker id d7c318d46200
  19m   19m     1       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Created         Created container with docker id d7c318d46200; Security:[seccomp=unconfined]
  39m   18m     2       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Warning Unhealthy       Liveness probe failed: Get http://192.168.99.129:9090/: dial tcp 192.168.99.129:9090: getsockopt: connection refused
  40m   2m      12      {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Pulling         pulling image "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1"
  40m   2m      12      {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Pulled          Successfully pulled image "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1"
  13m   1m      3       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Started         (events with common reason combined)
  13m   1m      3       {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Normal  Created         (events with common reason combined)
  29m   5s      125     {kubelet kub2.localhost}                                                Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-3203831700-sz5kr_kube-system(15a70355-de58-11e6-b7ee-0050568da433)"

  39m   5s      155     {kubelet kub2.localhost}        spec.containers{kubernetes-dashboard}   Warning BackOff Back-off restarting failed docker container
[root@kub1 ~]#
[root@kub1 ~]#
[root@kub1 ~]#
[root@kub1 ~]#
[root@kub1 ~]# kubectl logs kubernetes-dashboard-3203831700-sz5kr -n kube-system
Using HTTP port: 9090
Creating API server client for https://10.96.0.1:443
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to the troubleshooting guide for more information: https://github.com/kubernetes/dashboard/blob/master/docs/user-guide/troubleshooting.md
[root@kub1 ~]#
Expected result

dashboard --> Running

Comments
@Masber
Author

Masber commented Jan 19, 2017

More details:

Sorry, I forgot to add my "authentication to the Kubernetes API server" troubleshooting results:

certificate looks ok

[root@kub1 ~]# kubectl exec test-701078429-f2116 ls /var/run/secrets/kubernetes.io/serviceaccount/
ca.crt
namespace
token

List services

[root@kub1 ~]# kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.96.0.1    <none>        443/TCP   8h

check connectivity to API server

[root@kub1 ~]# kubectl exec test-701078429-f2116  -- curl -k https://10.96.0.1
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:04 --:--:--     0^C

[root@kub1 ~]# TOKEN_VALUE=$(kubectl exec test-701078429-f2116 -- cat /var/run/secrets/kubernetes.io/serviceaccount/token)
[root@kub1 ~]# echo $TOKEN_VALUE
eyJhbGciOiJS...90UQeSI1QSuw

[root@kub1 ~]# kubectl exec test-701078429-f2116  -- curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H  "Authorization: Bearer $TOKEN_VALUE" https://10.96.0.1
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:07 --:--:--     0^C
[root@kub1 ~]#

It looks to me like the pod can't connect to the API server.
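
For reference, a quick way to tell a pod-network problem from a host or apiserver problem is to run the same check from the worker node itself (host network). A minimal sketch; the /version path and the 5-second timeout are assumptions, not taken from the output above, and even an HTTP 403 here would still prove the service IP is reachable:

# run directly on kub2 (the node hosting the dashboard pod), outside any pod
curl -k --max-time 5 https://10.96.0.1:443/version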

@Masber Masber closed this as completed Jan 19, 2017
@Masber Masber reopened this Jan 19, 2017
@Masber
Author

Masber commented Jan 21, 2017

Can anybody help?

@ianlewis
Contributor

Dashboard includes tolerations for running on the master node. What node is dashboard running on? Is the API server accessible on 10.96.0.1:443 from the master node?

@Masber
Author

Masber commented Jan 29, 2017

@ianlewis please see below:

[root@kub1 ~]# kubectl get nodes
NAME             STATUS         AGE
kub1.localhost   Ready,master   6d
kub2.localhost   Ready          6d

So dashboard is not running on the master

[root@kub1 ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                       READY     STATUS             RESTARTS   AGE       IP               NODE
kube-system   kubernetes-dashboard-3203831700-mnx2w      0/1       CrashLoopBackOff   473        1d        192.168.99.136   kub2.localhost

I can access 10.96.0.1:443 from the master node:

[root@kub1 ~]# telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.

@ianlewis
Contributor

ianlewis commented Feb 1, 2017

Can you access that IP from kub2 where the dashboard is running?

@Masber
Author

Masber commented Feb 1, 2017

kub2 can access the kubernetes service IP on port 443

[root@kub2 ~]# telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.

@ianlewis
Contributor

ianlewis commented Feb 1, 2017

I'm out of ideas; I don't know why it might not be working. Even so, it's very likely network- or Kubernetes-core-related rather than a bug in the Dashboard.

You should try to actually send a request to the API server and make sure you can get a response. It may be that the API server cannot use etcd.
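
A minimal sketch of such a check, assuming the kubeconfig path kubeadm normally writes; componentstatuses reports apiserver-to-etcd health, and the authenticated /version request should return JSON instead of timing out:

# on the master, with the admin kubeconfig
kubectl --kubeconfig /etc/kubernetes/admin.conf get componentstatuses

# authenticated request from inside the test pod, reusing the token read earlier in this thread
kubectl exec test-701078429-f2116 -- curl -s --max-time 5 \
  --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN_VALUE" https://10.96.0.1:443/version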

@j0nesin

j0nesin commented Feb 1, 2017

Same issue. Latest kubeadm deploy and dashboard can't connect. Dashboard 1.5.0 works on previous kubeadm deploy, but fails on latest kubeadm. Same for Dashboard 1.5.1.

Just realized my env has kubeadm 1.6 alpha and kube 1.5.1. That could be the problem.

[centos@ip-10-0-10-10 ~]$ yum list installed | grep kube
kubeadm.x86_64                   1.6.0-0.alpha.0.2074.a092d8e0f95f52 @kubernetes
kubectl.x86_64                   1.5.1-0                             @kubernetes
kubelet.x86_64                   1.5.1-0                             @kubernetes
kubernetes-cni.x86_64            0.3.0.1-0.07a8a2                    @kubernetes

@Masber
Author

Masber commented Feb 2, 2017

@j0nesin are you using flannel or calico?

@j0nesin

j0nesin commented Feb 2, 2017

weave

@Masber
Author

Masber commented Feb 2, 2017

@j0nesin @ianlewis I just realised that dashboard works fine if it is running on the same node as the apiserver.

I am using calico
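
If that's the pattern, one possible stop-gap (a sketch, not something suggested in this thread) is to pin the dashboard to the master with a nodeSelector until the pod network is fixed; kubernetes.io/hostname is a label the kubelet sets on every node, and kub1.localhost is the master name from the output above:

kubectl -n kube-system patch deployment kubernetes-dashboard -p \
  '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/hostname":"kub1.localhost"}}}}}'

# confirm the pod was rescheduled onto the master
kubectl -n kube-system get pods -o wide | grep kubernetes-dashboard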

@ChristopherQian

@Masber I have the same issue: kubeadm 1.6.0-0, kube 1.5.2, Dashboard 1.5.1.

@ading1977

I have the same issue too,

[root@mdinglin09 .kube]# yum list installed | grep kube
kubeadm.x86_64 1.6.0-0.alpha.0.2074.a092d8e0f95f52
kubectl.x86_64 1.5.1-0 installed
kubelet.x86_64 1.5.1-0 installed
kubernetes-cni.x86_64 0.3.0.1-0.07a8a2 installed

[root@mdinglin09 .kube]# docker logs -f 66db8728f715
Using HTTP port: 9090
Creating API server client for https://10.96.0.1:443
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to the troubleshooting guide for more information: https://github.com/kubernetes/dashboard/blob/master/docs/user-guide/troubleshooting.md

@wuzesheng

I have the same issue too. Can anyone help?

[root@bongmi-k8s-master ~]# yum list installed | grep kube
kubeadm.x86_64 1.6.0-0.alpha.0.2074.a092d8e0f95f52 @kubernetes
kubectl.x86_64 1.5.1-0 @kubernetes
kubelet.x86_64 1.5.1-0 @kubernetes
kubernetes-cni.x86_64 0.3.0.1-0.07a8a2 @kubernetes

[root@bongmi-k8s-master ~]# kubectl logs -n kube-system kubernetes-dashboard-4027881251-vpnkq
Using HTTP port: 9090
Creating API server client for https://10.96.0.1:443
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to the troubleshooting guide for more information: https://github.com/kubernetes/dashboard/blob/master/docs/user-guide/troubleshooting.md

@wuzesheng

BTW: I use flannel network

@wuzesheng

@Masber Yes, you're right.

@donutloop

I have the same issue too:

kubectl logs --namespace=kube-system kubernetes-dashboard-3615790904-zmnb4
Using HTTP port: 9090
Creating API server client for https://10.96.0.1:443
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to the troubleshooting guide for more information: https://github.com/kubernetes/dashboard/blob/master/docs/user-guide/troubleshooting.md

I use flannel network as well

@wuzesheng

Can anyone help with this issue?

@cheld
Contributor

cheld commented Mar 9, 2017

OK, I am going to check it now.

@cheld
Contributor

cheld commented Mar 9, 2017

So, I followed the installation guide on two clean Ubuntu 16 VMs.

apt list --installed | grep kube

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

kubeadm/kubernetes-xenial,now 1.6.0-alpha.0-2074-a092d8e0f95f52-00 amd64 [installed]
kubectl/kubernetes-xenial,now 1.5.2-00 amd64 [installed]
kubelet/kubernetes-xenial,now 1.5.2-00 amd64 [installed]
kubernetes-cni/kubernetes-xenial,now 0.3.0.1-07a8a2-00 amd64 [installed]

The first attempt failed. Some pods could not be scheduled because a single-core VM could not satisfy the CPU requests. I increased to two CPU cores, but the master did not start up properly. A bit strange. My observation was similar to kubernetes/kubernetes#33671.

On the second attempt I started from scratch with a two-core VM and everything worked smoothly. It did not matter whether Dashboard was scheduled on the master or on a worker node; both worked.

So, I could not find anything Dashboard related.

@cheld
Contributor

cheld commented Mar 9, 2017

I used Weave as suggested in the initial issue description

@tuannvm

tuannvm commented Mar 15, 2017

+1

@j0nesin

j0nesin commented Mar 15, 2017

Things started working for me, and I suspect it was due to using the latest kubernetes-dashboard.yaml with the annotation for running the dashboard on the master.

https://github.com/kubernetes/dashboard/blob/master/src/deploy/kubernetes-dashboard.yaml
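
A quick way to sanity-check that theory (a sketch, assuming the raw.githubusercontent.com mirror of the same file): look for the master toleration in the manifest before applying it, then confirm where the pod actually lands:

curl -s https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml | grep -i -A 3 toleration

# after applying, check which node the pod was scheduled to
kubectl -n kube-system get pods -o wide | grep kubernetes-dashboard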

@cheld
Contributor

cheld commented Mar 15, 2017

@j0nesin I tested deployment on master AND worker node. I could not observe any difference.

@vhosakot

vhosakot commented Mar 17, 2017

I see this issue too with kubernetes 1.5.4 and kubernetes-dashboard image version gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.0.

I installed kubeadm following https://kubernetes.io/docs/getting-started-guides/kubeadm/, and then installed kubernetes-dashboard with:

kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.6.0/src/deploy/kubernetes-dashboard.yaml

I see the kubernetes-dashboard in CrashLoopBackOff status and the k8s_kubernetes-dashboard.* container on the worker is in Exited state.

Below are the errors. Has anyone successfully installed kubernetes-dashboard on kubeadm?

# kubectl --namespace=kube-system get all
NAME                                                          READY     STATUS             RESTARTS   AGE
po/calico-policy-controller-mqsmh                             1/1       Running            0          4h
po/canal-etcd-tm2rv                                           1/1       Running            0          4h
po/canal-node-3nv2t                                           3/3       Running            0          4h
po/canal-node-5fckh                                           3/3       Running            1          4h
po/canal-node-6zgq8                                           3/3       Running            0          4h
po/canal-node-rtjl8                                           3/3       Running            0          4h
po/dummy-2088944543-09w8n                                     1/1       Running            0          4h
po/etcd-vhosakot-kolla-kube1.localdomain                      1/1       Running            0          4h
po/kube-apiserver-vhosakot-kolla-kube1.localdomain            1/1       Running            2          4h
po/kube-controller-manager-vhosakot-kolla-kube1.localdomain   1/1       Running            0          4h
po/kube-discovery-1769846148-pftx5                            1/1       Running            0          4h
po/kube-dns-2924299975-9m2cp                                  4/4       Running            0          4h
po/kube-proxy-0ndsb                                           1/1       Running            0          4h
po/kube-proxy-h7qrd                                           1/1       Running            1          4h
po/kube-proxy-k6168                                           1/1       Running            0          4h
po/kube-proxy-lhn0k                                           1/1       Running            0          4h
po/kube-scheduler-vhosakot-kolla-kube1.localdomain            1/1       Running            0          4h
po/kubernetes-dashboard-3203962772-mw26t                      0/1       CrashLoopBackOff   11         41m
NAME                       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
svc/canal-etcd             10.96.232.136    <none>        6666/TCP        4h
svc/kube-dns               10.96.0.10       <none>        53/UDP,53/TCP   4h
svc/kubernetes-dashboard   10.100.254.77    <nodes>       80:30085/TCP    41m
NAME                   DESIRED   SUCCESSFUL   AGE
jobs/configure-canal   1         1            4h
NAME                          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/kube-discovery         1         1         1            1           4h
deploy/kube-dns               1         1         1            1           4h
deploy/kubernetes-dashboard   1         1         1            0           41m
NAME                                 DESIRED   CURRENT   READY     AGE
rs/calico-policy-controller          1         1         1         4h
rs/dummy-2088944543                  1         1         1         4h
rs/kube-discovery-1769846148         1         1         1         4h
rs/kube-dns-2924299975               1         1         1         4h
rs/kubernetes-dashboard-3203962772   1         1         0         41m

# kubectl --namespace=kube-system describe pod kubernetes-dashboard-3203962772-mw26t
  20m    5s    89    {kubelet vhosakot-kolla-kube2.localdomain}                        Warning    FailedSync    Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-3203962772-mw26t_kube-system(67b0d69b-0b47-11e7-8c97-7a2ed4192438)"

# kubectl --namespace=kube-system logs kubernetes-dashboard-3203962772-mw26t
Using HTTP port: 9090
Creating API server client for https://10.96.0.1:443
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to the troubleshooting guide for more information: https://github.com/kubernetes/dashboard/blob/master/docs/user-guide/troubleshooting.md

# docker ps -a | grep -i dash
3c33cf43d5e4        gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.0   "/dashboard --port=90"   54 seconds ago      Exited (1) 22 seconds ago                       k8s_kubernetes-dashboard.9eb4d80e_kubernetes-dashboard-3203962772-mw26t_kube-system_67b0d69b-0b47-11e7-8c97-7a2ed4192438_93520bd4

# docker logs k8s_kubernetes-dashboard.9eb4d80e_kubernetes-dashboard-3203962772-mw26t_kube-system_67b0d69b-0b47-11e7-8c97-7a2ed4192438_93520bd4
Using HTTP port: 9090
Creating API server client for https://10.96.0.1:443
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to the troubleshooting guide for more information: https://github.com/kubernetes/dashboard/blob/master/docs/user-guide/troubleshooting.md

@aaron-trout

aaron-trout commented Mar 28, 2017

I am also seeing the same problem on kubeadm.

$ kubectl --namespace kube-system logs kubernetes-dashboard-3203962772-vw6sm --follow
Using HTTP port: 9090
Creating API server client for https://10.96.0.1:443
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to the troubleshooting guide for more information: https://github.com/kubernetes/dashboard/blob/master/docs/user-guide/troubleshooting.md

I don't think this is a problem with the dashboard itself because I cannot curl the apiserver from other pods.

# curl https://10.96.0.1:443/version
curl: (7) Failed to connect to 10.96.0.1 port 443: Operation timed out
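
A throwaway pod is enough to repeat that check from anywhere on the pod network (a sketch; the image and pod name are just examples of a curl-capable image, and --restart=Never keeps it a one-off pod):

kubectl run -it --rm nettest --image=appropriate/curl --restart=Never -- \
  curl -k --max-time 5 https://10.96.0.1:443/version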

@gregorjs

+1

@jeroenjacobs79

Suffering from the same issue. #1802

@ianlewis ianlewis self-assigned this Apr 14, 2017
@ianlewis
Contributor

I'll try to look at this in the next week or two.

@alexlokshin

Same issue here. Used kubeadm; looking at the docker logs it seems the dashboard doesn't pass the CA cert.

@floreks
Member

floreks commented May 8, 2017

It is a kubeadm issue, not a Dashboard one. Kubernetes is responsible for mounting certs into the pods. Most logs here point to a cluster networking issue. I am successfully running kubeadm-based clusters on a Raspberry Pi and on my desktop PC with Ubuntu. kubeadm has just graduated to beta and some people still have issues with DNS and networking.
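
Since the 10.96.0.1 service IP is implemented by kube-proxy on each node, a couple of node-side checks usually narrow this down (a sketch; pod names will differ per cluster):

# on the affected worker node: are there iptables rules for the kubernetes service IP?
iptables-save | grep 10.96.0.1

# the kube-proxy pod on that node often logs why the rules are missing
kubectl -n kube-system get pods -o wide | grep kube-proxy
kubectl -n kube-system logs <kube-proxy-pod-on-that-node>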

@floreks floreks closed this as completed May 8, 2017
@aerobiotic

aerobiotic commented Sep 15, 2017

If you follow the kubeadm instructions to the letter (install Docker, Kubernetes (kubeadm, kubectl, and kubelet), and Calico with the kubeadm-hosted instructions) and your nodes have physical IP addresses in the 192.168.x.x range, you will end up with the non-working dashboard described above. This is because the node IP addresses clash with Calico's internal pod IP range. To fix it, do this during installation:

During the master node cluster creation step:
export CALICO_IPV4POOL_CIDR=172.16.0.0
kubeadm init --pod-network-cidr=$CALICO_IPV4POOL_CIDR/16

When you install the pod network and have chosen Calico, download calico.yaml and patch in the alternate CIDR:

wget https://docs.projectcalico.org/v2.5/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml -O calico.yaml

sed -i "s/192.168.0.0/$CALICO_IPV4POOL_CIDR/g" calico.yaml

kubectl apply -f calico.yaml
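
After the cluster comes back up, it is worth confirming that the pod addresses really no longer overlap the node addresses (a generic check, nothing Calico-specific):

# node (physical) IPs and pod IPs should now sit in clearly different ranges
kubectl get nodes -o wide
kubectl get pods --all-namespaces -o wide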

@kevinhooke

Same issue for me. Followed kubeadm install steps for a master and 2 node cluster on CentOS 7. When adding dashboard, found same issues as here when using flannel, but when recreated cluster with same steps but using Weave, dashboard works. Hope that helps someone narrow down where the issue is.

@donutloop

Same issue for me. Followed kubeadm install steps for a master and 2 node cluster on CentOS 7. When adding dashboard, found same issues as here when using flannel, but when recreated cluster with same steps but using Weave, dashboard works. Hope that helps someone narrow down where the issue is.

@kevinhooke can you please post what you did?

@yarsergio

It doesn't work with flannel. You can use another network, like calico or weave.
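
For anyone retrying with flannel: its stock kubeadm manifest assumes the 10.244.0.0/16 pod CIDR, and initializing the cluster without passing that CIDR is a common cause of exactly these timeouts. A sketch under that assumption (the manifest URL and CIDR are flannel's documented defaults, not something confirmed in this thread):

kubeadm reset
kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml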

@kevinhooke

@donutloop I covered most of the steps I followed in a post here: https://www.kevinhooke.com/2017/10/20/deploying-kubernetes-dashboard-to-a-kubeadm-created-cluster/

@kevinhooke

kevinhooke commented Nov 2, 2017

@yarsergio is this a known limitation with kubeadm and/or the dashboard, or just what we've discovered through trial and error? If it's a limitation with kubeadm, it would be useful if the docs were updated to say not to use flannel; otherwise others will go down this same path and end up stuck too.

@shkrid

shkrid commented Dec 5, 2017

Are there any updates or a workaround for the flannel network? Same issue here.

@floreks
Member

floreks commented Dec 5, 2017

What issue? This one is related to a very old version of Dashboard.

@leehambley

My company struggled with the dashboard not working in a more modern setup (something with RBAC). I subscribed to this issue only last week looking for a fix. Since then, it seems something has changed, because I recently reapplied the curl | bash script and it worked perfectly.

I know that's not super helpful, but my team never actually worked out why it wasn't working (it looked like RBAC issues in the defaults), and we have no explanation for why it suddenly started working when someone else tried it again "just in case".
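
If it does turn out to be RBAC, a blunt way to confirm is to grant the dashboard's service account broad rights temporarily (a debugging sketch only, assuming the manifest's kubernetes-dashboard ServiceAccount in kube-system; don't leave cluster-admin bound in production):

kubectl create clusterrolebinding kubernetes-dashboard-debug \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:kubernetes-dashboard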

@shkrid

shkrid commented Dec 5, 2017

@flaper87 @leehambley
Same thing. My first try was 15 days earlier. Today I tried again (re-deployed flannel and the dashboard) and everything works fine. Thanks.

@gemfield

gemfield commented Aug 4, 2018

@aerobiotic's solution fixed my problem. There is also a guide for installing K8s with kubeadm behind a firewall: https://zhuanlan.zhihu.com/p/40931670

@MaxCCC

MaxCCC commented Mar 31, 2019

Same issue: the dashboard only works when I run it on the master node (apiserver), not on other nodes, even though I can access the apiserver (telnet).

@MaxCCC

MaxCCC commented Mar 31, 2019

@j0nesin @ianlewis I just realised that dashboard works fine if it is running on the same node as the apiserver.

I am using calico

For me too, but is this normal behaviour?
