Kubernetes Dashboard
- Install dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.1.0/aio/deploy/recommended.yaml
- Update the Service with the NodePort type. This exposes the Service on a static port on each node's IP address.
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.1.0/aio/deploy/recommended.yaml
- Rename the downloaded file and edit the kubernetes-dashboard Service to set "type: NodePort"
mv recommended.yaml kubernetes-dashboard-deployment.yml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
- Apply update
kubectl apply -f kubernetes-dashboard-deployment.yml
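- Optionally, instead of editing and re-applying the downloaded file, the Service type can be patched in place. A sketch using kubectl patch; this variant is not part of the original steps:
kubectl -n kubernetes-dashboard patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'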
- Check status
kubectl get deployments -n kubernetes-dashboard
- Check pods
kubectl get pods -n kubernetes-dashboard
- Check service
kubectl get services -n kubernetes-dashboard
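- As an optional extra (not in the original steps), the assigned NodePort can be extracted with jsonpath; <node-ip> below is a placeholder for any node's address:
NODE_PORT=$(kubectl -n kubernetes-dashboard get service kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}')
echo "Dashboard available at https://<node-ip>:${NODE_PORT}"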
- Create a manifest file for an administrator service account.
vi admin-sa.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-admin
  namespace: kube-system
- Apply
kubectl apply -f admin-sa.yml
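- Optionally verify the service account exists (a check added here, not in the original steps):
kubectl -n kube-system get serviceaccount kube-admin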
- Create a cluster-admin role binding
vi admin-rbac.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: kube-admin
    namespace: kube-system
- Apply
kubectl apply -f admin-rbac.yml
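- Optionally confirm the binding took effect; kubectl auth can-i impersonating the service account should print "yes" (a verification step, not in the original page):
kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:kube-admin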
- Retrieve the auth token
SA_NAME="kube-admin"
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep ${SA_NAME} | awk '{print $1}')
Name: kube-admin-token-gjqz6
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: kube-admin
kubernetes.io/service-account.uid: 5c7de451-88d6-444b-84b0-e60ce015a557
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1066 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6InBqOERnR0FaSVZoNi1PMS05OHBUbmdNVjA2amJXMTk3MXZuaWt1Z2VCVVUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlLWFkbWluLXRva2VuLWdqcXo2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Imt1YmUtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI1YzdkZTQ1MS04OGQ2LTQ0NGItODRiMC1lNjBjZTAxNWE1NTciLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06a3ViZS1hZG1pbiJ9.dvHefOZCNT400E-SaEpVoW03Nm6hrinhrihSmmKCEXMRBYv85nrc_a5lzDElB4HlzPBrgOd9K7za9o7YXScx2P2oFZ45BtaoHGV4eS1N5wDMBwojhYb-3NGIJR8bjY4GEDHTRqK9iIfCc5q8pCl1JVh9NYU3UWi-uk3LZWI9Cpa6wpuCpetRSTT-4OLHicpDS7y_mltLJdEelugEPG0CTxlY0m4CLhfvJNr8Ohm1-XEYEbam7RBMoaPvmy6gtPx0xFg6vvijLbrpfXnbwFTN1uMt9J3BUGLi5V9iI2ZSS9UP9grfaL6JCpuqP6qQhx7dvdj6ipD0YRz6L_5X7PQvAA
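- To print only the decoded token rather than the whole secret, a jsonpath variant can be used (an addition, reusing SA_NAME from above; SECRET_NAME is a helper variable introduced here):
SECRET_NAME=$(kubectl -n kube-system get secret | grep ${SA_NAME} | awk '{print $1}')
kubectl -n kube-system get secret ${SECRET_NAME} -o jsonpath='{.data.token}' | base64 --decode
Note that on Kubernetes v1.24 and later, token secrets are no longer auto-created for service accounts; there, kubectl create token kube-admin -n kube-system generates a token instead.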
- Show the dashboard service details.
kubectl get service -n kubernetes-dashboard | grep dashboard
- Check pod status
kubectl get pods -o wide --all-namespaces
- Ensure the dashboard runs on the master. In this case it was initially scheduled on the worker nodes:
$ kubectl get pods -o wide -n kubernetes-dashboard
kubernetes-dashboard dashboard-metrics-scraper-79c5968bdc-wkgmp 1/1 Running 0 19s 192.168.1.2 node-1 <none> <none>
kubernetes-dashboard kubernetes-dashboard-7448ffc97b-rfxtw 1/1 Running 0 19s 192.168.2.2 node-2 <none> <none>
- The drain step below can be avoided by cordoning the worker nodes first, which stops scheduling on them:
kubectl cordon node-1 node-2
The drain approach is kept here for reference.
$ kubectl drain node-1 node-2 --ignore-daemonsets --delete-emptydir-data
$ kubectl get pods -o wide -n kubernetes-dashboard
kubernetes-dashboard dashboard-metrics-scraper-79c5968bdc-rwhlh 1/1 Running 0 43s 192.168.0.4 k8s-master <none> <none>
kubernetes-dashboard kubernetes-dashboard-7448ffc97b-lkxq8 1/1 Running 0 35s 192.168.0.5 k8s-master <none> <none>
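- To confirm the workers are unschedulable after a cordon or drain, check the node status; affected nodes report SchedulingDisabled (a verification step added here, not in the original):
kubectl get nodes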
- Resume node scheduling
kubectl uncordon node-1 node-2
- View logs
kubectl logs --namespace=kubernetes-dashboard kubernetes-dashboard-7448ffc97b-lkxq8
2021/01/25 17:37:09 Starting overwatch
2021/01/25 17:37:09 Using namespace: kubernetes-dashboard
2021/01/25 17:37:09 Using in-cluster config to connect to apiserver
2021/01/25 17:37:09 Using secret token for csrf signing
2021/01/25 17:37:09 Initializing csrf token from kubernetes-dashboard-csrf secret
2021/01/25 17:37:09 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2021/01/25 17:37:09 Successful initial request to the apiserver, version: v1.20.2
2021/01/25 17:37:09 Generating JWE encryption key
2021/01/25 17:37:09 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2021/01/25 17:37:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2021/01/25 17:37:09 Initializing JWE encryption key from synchronized object
2021/01/25 17:37:09 Creating in-cluster Sidecar client
2021/01/25 17:37:09 Auto-generating certificates
2021/01/25 17:37:09 Successful request to sidecar
2021/01/25 17:37:09 Successfully created certificates
2021/01/25 17:37:09 Serving securely on HTTPS port: 8443
- Start proxy for access
kubectl proxy
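- kubectl proxy serves on 127.0.0.1:8001 by default. If preferred, the port and address can be set explicitly and the proxy run in the background (an optional variant, not in the original steps):
kubectl proxy --port=8001 --address=127.0.0.1 &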
- Create SSH tunnel to master for dashboard access.
ssh -i /home/tnaw/.vagrant.d/insecure_private_key -L 8001:localhost:8001 vagrant@192.168.50.10
- Open the following URL in a browser.
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/
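- Since the Service was changed to type NodePort, the dashboard should also be reachable directly at a node address without the proxy, e.g. https://192.168.50.10:<node-port>, where <node-port> is the port reported by "kubectl get services" (the browser must accept the dashboard's self-signed certificate).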
- Authenticate using the token from the prior step.
- Delete the dashboard if desired. Deleting only pods and services would let the Deployments recreate the pods, so remove the applied manifest instead.
kubectl delete -f kubernetes-dashboard-deployment.yml
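- The admin service account and cluster role binding created above can be cleaned up as well (extra commands, not part of the original page):
kubectl -n kube-system delete serviceaccount kube-admin
kubectl delete clusterrolebinding kube-admin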