
While heapster is being deprecated : which dashboard version should I use ? #4202

Closed
snouffelaire opened this issue Aug 16, 2019 · 1 comment
Labels
kind/support Categorizes issue or PR as a support question.

snouffelaire commented Aug 16, 2019

Hi,

I'm new to the Kubernetes world, so forgive me if I make mistakes. I'm trying to deploy the Kubernetes dashboard.

My cluster contains three masters and three workers; the workers are drained and marked unschedulable so that the dashboard gets installed on one of the master nodes:

[root@pp-tmp-test20 ~]# kubectl get nodes

NAME            STATUS                     ROLES    AGE    VERSION
pp-tmp-test20   Ready                      master   2d2h   v1.15.2
pp-tmp-test21   Ready                      master   37h    v1.15.2
pp-tmp-test22   Ready                      master   37h    v1.15.2
pp-tmp-test23   Ready,SchedulingDisabled   worker   36h    v1.15.2
pp-tmp-test24   Ready,SchedulingDisabled   worker   36h    v1.15.2
pp-tmp-test25   Ready,SchedulingDisabled   worker   36h    v1.15.2
# kubectl version

Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T09:23:26Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T09:15:22Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
  • So I created a simple admin user:
# vi dashboard-adminuser.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

# vi dashboard-adminuser-ClusterRoleBinding.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

# kubectl apply -f dashboard-adminuser.yaml
serviceaccount/admin-user created

# kubectl apply -f dashboard-adminuser-ClusterRoleBinding.yaml
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
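For reference, the bearer token for this admin-user ServiceAccount can be extracted from its auto-created secret (a sketch for Kubernetes 1.15, where ServiceAccount token secrets are generated automatically; the secret name is looked up dynamically rather than hard-coded):

```shell
# Find the secret that holds the admin-user token and print the decoded token
kubectl -n kube-system get secret \
  $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') \
  -o jsonpath='{.data.token}' | base64 --decode
```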
  • Then I deployed dashboard v1.10.1:

# kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
  • After this, a pod kubernetes-dashboard-5698d5bc9-ql6q8 is scheduled on my master node pp-tmp-test20/172.31.68.220

NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE
kube-system   coredns-5c98db65d4-22klq                1/1     Running   1          3d13h
kube-system   coredns-5c98db65d4-rdzgs                1/1     Running   1          3d13h
kube-system   etcd-pp-tmp-test20                      1/1     Running   1          3d13h
kube-system   etcd-pp-tmp-test21                      1/1     Running   1          3d
kube-system   etcd-pp-tmp-test22                      1/1     Running   2          3d
kube-system   kube-apiserver-pp-tmp-test20            1/1     Running   1          3d13h
kube-system   kube-apiserver-pp-tmp-test21            1/1     Running   1          3d
kube-system   kube-apiserver-pp-tmp-test22            1/1     Running   2          3d
kube-system   kube-controller-manager-pp-tmp-test20   1/1     Running   3          3d13h
kube-system   kube-controller-manager-pp-tmp-test21   1/1     Running   1          3d
kube-system   kube-controller-manager-pp-tmp-test22   1/1     Running   2          3d
kube-system   kube-flannel-ds-amd64-59rvh             1/1     Running   2          2d23h
kube-system   kube-flannel-ds-amd64-9f7j9             1/1     Running   5          3d
kube-system   kube-flannel-ds-amd64-bz6tx             1/1     Running   1          3d13h
kube-system   kube-flannel-ds-amd64-hkgl5             1/1     Running   3          2d23h
kube-system   kube-flannel-ds-amd64-pv9vb             1/1     Running   2          3d
kube-system   kube-flannel-ds-amd64-wgvg5             1/1     Running   3          3d
kube-system   kube-proxy-4pttg                        1/1     Running   2          2d23h
kube-system   kube-proxy-5bj6h                        1/1     Running   2          2d23h
kube-system   kube-proxy-hbxvr                        1/1     Running   1          3d
kube-system   kube-proxy-qfhn9                        1/1     Running   1          3d13h
kube-system   kube-proxy-tsns4                        1/1     Running   2          3d
kube-system   kube-proxy-zdmnx                        1/1     Running   4          3d
kube-system   kube-scheduler-pp-tmp-test20            1/1     Running   3          3d13h
kube-system   kube-scheduler-pp-tmp-test21            1/1     Running   1          3d
kube-system   kube-scheduler-pp-tmp-test22            1/1     Running   2          3d
kube-system   kubernetes-dashboard-5698d5bc9-ql6q8   1/1     Running   0          71m
  • The pod's logs:
[root@pp-tmp-test20 ~]# kubectl logs kubernetes-dashboard-5698d5bc9-ql6q8 -n kube-system

2019/08/14 10:14:57 Starting overwatch
2019/08/14 10:14:57 Using in-cluster config to connect to apiserver
2019/08/14 10:14:57 Using service account token for csrf signing
2019/08/14 10:14:58 Successful initial request to the apiserver, version: v1.15.2
2019/08/14 10:14:58 Generating JWE encryption key
2019/08/14 10:14:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2019/08/14 10:14:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2019/08/14 10:14:59 Initializing JWE encryption key from synchronized object
2019/08/14 10:14:59 Creating in-cluster Heapster client
2019/08/14 10:14:59 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 10:14:59 Auto-generating certificates
2019/08/14 10:14:59 Successfully created certificates
2019/08/14 10:14:59 Serving securely on HTTPS port: 8443
2019/08/14 10:15:29 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 10:15:59 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
  • The describe of the pod:
[root@pp-tmp-test20 ~]# kubectl describe pod kubernetes-dashboard-5698d5bc9-ql6q8 -n kube-system

Name:           kubernetes-dashboard-5698d5bc9-ql6q8
Namespace:      kube-system
Priority:       0
Node:           pp-tmp-test20/172.31.68.220
Start Time:     Wed, 14 Aug 2019 16:58:39 +0200
Labels:         k8s-app=kubernetes-dashboard
                pod-template-hash=5698d5bc9
Annotations:    <none>
Status:         Running
IP:             10.244.0.7
Controlled By:  ReplicaSet/kubernetes-dashboard-5698d5bc9
Containers:
  kubernetes-dashboard:
    Container ID:  docker://40edddf7a9102d15e3b22f4bc6f08b3a07a19e4841f09360daefbce0486baf0e
    Image:         k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
    Image ID:      docker-pullable://k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --auto-generate-certificates
    State:          Running
      Started:      Wed, 14 Aug 2019 16:58:43 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 14 Aug 2019 16:58:41 +0200
      Finished:     Wed, 14 Aug 2019 16:58:42 +0200
    Ready:          True
    Restart Count:  1
    Liveness:       http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-ptw78 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kubernetes-dashboard-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-certs
    Optional:    false
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kubernetes-dashboard-token-ptw78:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-token-ptw78
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  dashboard=true
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age                    From                    Message
  ----    ------     ----                   ----                    -------
  Normal  Scheduled  2m41s                  default-scheduler       Successfully assigned kube-system/kubernetes-dashboard-5698d5bc9-ql6q8 to pp-tmp-test20
  Normal  Pulled     2m38s (x2 over 2m40s)  kubelet, pp-tmp-test20  Container image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1" already present on machine
  Normal  Created    2m37s (x2 over 2m39s)  kubelet, pp-tmp-test20  Created container kubernetes-dashboard
  Normal  Started    2m37s (x2 over 2m39s)  kubelet, pp-tmp-test20  Started container kubernetes-dashboard
  • The describe of the dashboard service:
[root@pp-tmp-test20 ~]# kubectl describe svc/kubernetes-dashboard -n kube-system

Name:              kubernetes-dashboard
Namespace:         kube-system
Labels:            k8s-app=kubernetes-dashboard
Annotations:       <none>
Selector:          k8s-app=kubernetes-dashboard
Type:              ClusterIP
IP:                10.110.236.88
Port:              <unset>  443/TCP
TargetPort:        8443/TCP
Endpoints:         10.244.0.7:8443
Session Affinity:  None
Events:            <none>
  • The docker ps on my master running the pod:
[root@pp-tmp-test20 ~]# docker ps

CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
40edddf7a910        f9aed6605b81           "/dashboard --inse..."   7 minutes ago       Up 7 minutes                            k8s_kubernetes-dashboard_kubernetes-dashboard-5698d5bc9-ql6q8_kube-system_f785d4bd-2e67-4daa-9f6c-19f98582fccb_1
e7f3820f1cf2        k8s.gcr.io/pause:3.1   "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_kubernetes-dashboard-5698d5bc9-ql6q8_kube-system_f785d4bd-2e67-4daa-9f6c-19f98582fccb_0

[root@pp-tmp-test20 ~]# docker logs 40edddf7a910
2019/08/14 14:58:43 Starting overwatch
2019/08/14 14:58:43 Using in-cluster config to connect to apiserver
2019/08/14 14:58:43 Using service account token for csrf signing
2019/08/14 14:58:44 Successful initial request to the apiserver, version: v1.15.2
2019/08/14 14:58:44 Generating JWE encryption key
2019/08/14 14:58:44 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2019/08/14 14:58:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2019/08/14 14:58:44 Initializing JWE encryption key from synchronized object
2019/08/14 14:58:44 Creating in-cluster Heapster client
2019/08/14 14:58:44 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 14:58:44 Auto-generating certificates
2019/08/14 14:58:44 Successfully created certificates
2019/08/14 14:58:44 Serving securely on HTTPS port: 8443
2019/08/14 14:59:14 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 14:59:44 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 15:00:14 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
  • So I tried to launch the dashboard

1/ On my master I start the proxy

[root@pp-tmp-test20 ~]# kubectl proxy
Starting to serve on 127.0.0.1:8001

2/ I curl the url

[root@pp-tmp-test20 ~]# curl 127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

 <!doctype html> <html ng-app="kubernetesDashboard"> <head> <meta charset="utf-8"> <title ng-controller="kdTitle as $ctrl" ng-bind="$ctrl.title()"></title> <link rel="icon" type="image/png" href="assets/images/kubernetes-logo.png"> <meta name="viewport" content="width=device-width"> <link rel="stylesheet" href="static/vendor.93db0a0d.css"> <link rel="stylesheet" href="static/app.ddd3b5ec.css"> </head> <body ng-controller="kdMain as $ctrl"> <!--[if lt IE 10]>
      <p class="browsehappy">You are using an <strong>outdated</strong> browser.
      Please <a href="http://browsehappy.com/">upgrade your browser</a> to improve your
      experience.</p>
    <![endif]--> <kd-login layout="column" layout-fill="" ng-if="$ctrl.isLoginState()"> </kd-login> <kd-chrome layout="column" layout-fill="" ng-if="!$ctrl.isLoginState()"> </kd-chrome> <script src="static/vendor.bd425c26.js"></script> <script src="api/appConfig.json"></script> <script src="static/app.91a96542.js"></script> </body> </html> [root@pp-tmp-test20 ~]#

A few seconds later, the same command:

[root@pp-tmp-test20 ~]# curl 127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
Error: 'dial tcp 10.244.0.7:8443: connect: no route to host'

3/ I launched Firefox with X11 forwarding from my master and hit this URL:

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login

This is the error message I get in the browser:

Error: 'dial tcp 10.244.0.7:8443: connect: no route to host'
Trying to reach: 'https://10.244.0.7:8443/'
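An intermittent "no route to host" to a pod IP usually points at the overlay network (flannel here) or the host firewall rather than the dashboard itself. A hedged diagnostic sketch, run on the node hosting the pod (the pod IP 10.244.0.7 is taken from the describe output above):

```shell
# Does the node have routes to the pod subnet?
ip route | grep 10.244.

# Is the flannel VXLAN interface up?
ip addr show flannel.1

# Can the node reach the dashboard container directly?
curl -k --max-time 5 https://10.244.0.7:8443/

# REJECT rules (e.g. from firewalld) on the pod network are a common
# cause of intermittent "no route to host"
iptables -L -n | grep -i reject
```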

At the same time I got these errors in the console where I launched the proxy:

I0814 16:10:05.836114   20240 log.go:172] http: proxy error: context canceled
I0814 16:10:06.198701   20240 log.go:172] http: proxy error: context canceled
I0814 16:13:21.708190   20240 log.go:172] http: proxy error: unexpected EOF
I0814 16:13:21.708229   20240 log.go:172] http: proxy error: unexpected EOF
I0814 16:13:21.708270   20240 log.go:172] http: proxy error: unexpected EOF
I0814 16:13:39.335483   20240 log.go:172] http: proxy error: context canceled
I0814 16:13:39.716360   20240 log.go:172] http: proxy error: context canceled

But after refreshing the browser n times (randomly), I'm able to reach the login interface to enter the token (created earlier).

(screenshot: dashboardpba)

But... the same error occurs again:

(screenshot: dashboardpb1)

After hitting the 'Sign in' button n times, I'm able to get the dashboard... for a few seconds.

(screenshot: dashboardpb0)

(screenshot: dashboardpb5)

After that, the dashboard starts producing the same errors while I'm exploring the interface:

(screenshot: dashboardpb3)

(screenshot: dashboardpb4)

I looked at the pod's logs; we can see some traffic:

[root@pp-tmp-test20 ~]# kubectl logs kubernetes-dashboard-5698d5bc9-ql6q8  -n kube-system

2019/08/14 14:16:56 Getting list of all services in the cluster
2019/08/14 14:16:56 [2019-08-14T14:16:56Z] Outcoming response to 10.244.0.1:56140 with 200 status code
2019/08/14 14:17:01 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Incoming HTTP/2.0 GET /api/v1/login/status request from 10.244.0.1:56140: {}
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Outcoming response to 10.244.0.1:56140 with 200 status code
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Incoming HTTP/2.0 GET /api/v1/csrftoken/token request from 10.244.0.1:56140: {}
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Outcoming response to 10.244.0.1:56140 with 200 status code
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Incoming HTTP/2.0 POST /api/v1/token/refresh request from 10.244.0.1:56140: { contents hidden }
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Outcoming response to 10.244.0.1:56140 with 200 status code
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Incoming HTTP/2.0 GET /api/v1/settings/global/cani request from 10.244.0.1:56140: {}
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Outcoming response to 10.244.0.1:56140 with 200 status code
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Incoming HTTP/2.0 GET /api/v1/settings/global request from 10.244.0.1:56140: {}
2019/08/14 14:17:22 Cannot find settings config map: configmaps "kubernetes-dashboard-settings" not found

And again, the pod logs a few seconds later:

[root@pp-tmp-test20 ~]# kubectl logs kubernetes-dashboard-5698d5bc9-ql6q8  -n kube-system

Error from server: Get https://172.31.68.220:10250/containerLogs/kube-system/kubernetes-dashboard-5698d5bc9-ql6q8/kubernetes-dashboard: Forbidden

I tried to forward the port, but I lose the connection to the pod:

# kubectl --namespace=kube-system port-forward kubernetes-dashboard-5698d5bc9-ql6q8 8443

Forwarding from 127.0.0.1:8443 -> 8443
E0816 10:55:16.301541   17282 portforward.go:233] lost connection to pod
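As a workaround, forwarding through the Service instead of the pod survives pod restarts (a sketch; it still depends on the pod network working between apiserver, kubelet, and pod):

```shell
# Forward local port 8443 to port 443 of the dashboard Service
kubectl -n kube-system port-forward svc/kubernetes-dashboard 8443:443

# Then browse to https://localhost:8443/
```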

While looking for help, I read that Heapster is being deprecated:
Heapster is marked deprecated as of Kubernetes 1.11. Users are encouraged to use metrics-server instead, potentially supplemented by a third-party monitoring solution, such as Prometheus.

kubernetes-retired/heapster

Support metrics API

Heapster Deprecation Timeline
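If metrics are wanted, metrics-server is Heapster's replacement. A hedged sketch (the release URL and version are assumptions, check the metrics-server releases page for the manifest matching your cluster; on kubeadm clusters without properly signed kubelet certificates, the metrics-server Deployment often also needs the --kubelet-insecure-tls arg added):

```shell
# Deploy metrics-server (the v0.3.x line was current for Kubernetes 1.15)
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml

# Verify the metrics API is being served
kubectl top nodes
```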

What am I doing wrong?

Which version / URL of kubernetes-dashboard.yaml should I use with my Kubernetes version?

Should I install metrics-server? I don't know if it works with my Kubernetes version 1.15: https://github.com/kubernetes/dashboard/releases/tag/v1.10.0

Thank you very much for your help!

@snouffelaire snouffelaire added the kind/support Categorizes issue or PR as a support question. label Aug 16, 2019

snouffelaire commented Aug 16, 2019

The answer is here: https://github.com/kubernetes/dashboard/releases. Sorry!
# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta3/aio/deploy/recommended.yaml
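Note that dashboard v2.x deploys into its own kubernetes-dashboard namespace rather than kube-system, so the ServiceAccount and ClusterRoleBinding above would need to reference that namespace, and the proxy URL changes accordingly (a sketch):

```shell
# v2.x resources live in the kubernetes-dashboard namespace
kubectl -n kubernetes-dashboard get pods

# Proxy URL for v2.x (via kubectl proxy):
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
```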
