MicroK8s dashboard not working - CentOS #3188

Closed · varmaranjith opened this issue Jun 2, 2022 · 7 comments

varmaranjith commented Jun 2, 2022

Hi Experts,
I have installed MicroK8s on a CentOS VM and enabled the following add-ons with this command:
microk8s enable dns helm3 registry metallb dashboard storage

Also, to allow containers to resolve local DNS, I ran the commands below.

# Add the resolv-conf flag to the kubelet configuration
echo "--resolv-conf=/etc/resolv.conf" >> /var/snap/microk8s/current/args/kubelet
# Restart kubelet
service snap.microk8s.daemon-kubelet restart
# In the coredns ConfigMap, replace the Google DNS forwarders "8.8.8.8 8.8.4.4" with /etc/resolv.conf
kubectl edit configmap -n kube-system coredns
# Delete the coredns-xxxxx pod - it will be recreated automatically with the change above
kubectl delete pod coredns-xxx -n kube-system
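
A minimal sketch of how the change can be double-checked, assuming the stock MicroK8s CoreDNS addon (the Corefile key and deployment name are the defaults):

# The forward line should now read "forward . /etc/resolv.conf"
# instead of "forward . 8.8.8.8 8.8.4.4"
microk8s kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
# Confirm CoreDNS restarted cleanly with the new config
microk8s kubectl -n kube-system logs deployment/coredns --tail=20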

Then I started the proxy with the command:

microk8s dashboard-proxy

I am getting the following error message:

===============
Checking if Dashboard is running.
Infer repository core for addon dashboard
Waiting for Dashboard to come up.
error: timed out waiting for the condition on deployments/kubernetes-dashboard
Traceback (most recent call last):
  File "/snap/microk8s/3272/scripts/wrappers/dashboard_proxy.py", line 92, in <module>
    dashboard_proxy()
  File "/snap/microk8s/3272/usr/lib/python3/dist-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/snap/microk8s/3272/usr/lib/python3/dist-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/snap/microk8s/3272/usr/lib/python3/dist-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/snap/microk8s/3272/usr/lib/python3/dist-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/snap/microk8s/3272/scripts/wrappers/dashboard_proxy.py", line 46, in dashboard_proxy
    check_output(command)
  File "/snap/microk8s/3272/usr/lib/python3.6/subprocess.py", line 356, in check_output
    **kwargs).stdout
  File "/snap/microk8s/3272/usr/lib/python3.6/subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['/snap/microk8s/3272/microk8s-kubectl.wrapper', '-n', 'kube-system', 'wait', '--timeout=240s', 'deployment', 'kubernetes-dashboard', '--for', 'condition=available']' returned non-zero exit status 1.
=============
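
For context, the check that times out is the one shown at the bottom of the traceback; it can be repeated by hand to see whether the deployment ever becomes available and, if not, why:

# Same wait the dashboard-proxy wrapper runs
microk8s kubectl -n kube-system wait --timeout=240s deployment kubernetes-dashboard --for condition=available
# If it still times out, the deployment status usually explains why
microk8s kubectl -n kube-system describe deployment kubernetes-dashboard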

Any help appreciated.

Many Thanks,
/Ranjith

@ktsakalozos (Member)

Hi @varmaranjith, what does the output of microk8s kubectl get all -A look like? It would also help if you could share a microk8s inspect tarball.

varmaranjith commented Jun 7, 2022

Hi @ktsakalozos,
Here is the output and the tarball.
root@exstreamvm:~# microk8s kubectl get all -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/calico-kube-controllers-9969d55bb-j7dpp 0/1 Pending 0 4d18h
kube-system pod/calico-node-snzsg 0/1 Init:0/3 0 4d18h
kube-system pod/hostpath-provisioner-76f65f69ff-99qc4 0/1 Pending 0 4d18h
container-registry pod/registry-f69889b8c-sk2ps 0/1 Pending 0 4d18h
metallb-system pod/controller-6dfbf9b9c6-w44k8 0/1 Pending 0 4d18h
kube-system pod/metrics-server-5f8f64cb86-h44d2 0/1 Pending 0 4d18h
kube-system pod/dashboard-metrics-scraper-6b6f796c8d-vt9t4 0/1 Pending 0 4d18h
kube-system pod/kubernetes-dashboard-765646474b-89wzb 0/1 Pending 0 4d18h
kube-system pod/coredns-66bcf65bb8-dzlqz 0/1 Pending 0 2m6s

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.152.183.1 443/TCP 9d
kube-system service/kube-dns ClusterIP 10.152.183.10 53/UDP,53/TCP,9153/TCP 4d18h
container-registry service/registry NodePort 10.152.183.85 5000:32000/TCP 4d18h
kube-system service/metrics-server ClusterIP 10.152.183.203 443/TCP 4d18h
kube-system service/kubernetes-dashboard ClusterIP 10.152.183.112 443/TCP 4d18h
kube-system service/dashboard-metrics-scraper ClusterIP 10.152.183.73 8000/TCP 4d18h

NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/calico-node 1 1 0 1 0 kubernetes.io/os=linux 4d18h
metallb-system daemonset.apps/speaker 0 0 0 0 0 beta.kubernetes.io/os=linux 4d18h

NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/calico-kube-controllers 0/1 1 0 4d18h
kube-system deployment.apps/hostpath-provisioner 0/1 1 0 4d18h
container-registry deployment.apps/registry 0/1 1 0 4d18h
metallb-system deployment.apps/controller 0/1 1 0 4d18h
kube-system deployment.apps/metrics-server 0/1 1 0 4d18h
kube-system deployment.apps/dashboard-metrics-scraper 0/1 1 0 4d18h
kube-system deployment.apps/kubernetes-dashboard 0/1 1 0 4d18h
kube-system deployment.apps/coredns 0/1 1 0 4d18h

NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/calico-kube-controllers-9969d55bb 1 1 0 4d18h
kube-system replicaset.apps/hostpath-provisioner-76f65f69ff 1 1 0 4d18h
container-registry replicaset.apps/registry-f69889b8c 1 1 0 4d18h
metallb-system replicaset.apps/controller-6dfbf9b9c6 1 1 0 4d18h
kube-system replicaset.apps/metrics-server-5f8f64cb86 1 1 0 4d18h
kube-system replicaset.apps/dashboard-metrics-scraper-6b6f796c8d 1 1 0 4d18h
kube-system replicaset.apps/kubernetes-dashboard-765646474b 1 1 0 4d18h
kube-system replicaset.apps/coredns-66bcf65bb8 1 1 0 4d18h
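
Everything above is Pending (and calico-node is stuck in Init), so a generic way to see what is holding the pods back is to check their events; a sketch, using the pod names from the listing above:

microk8s kubectl -n kube-system describe pod calico-node-snzsg | tail -n 20
microk8s kubectl get events -A --sort-by=.metadata.creationTimestamp | tail -n 20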

=====
root@exstreamvm:~# microk8s inspect
Inspecting system
Inspecting Certificates
Inspecting services
Service snap.microk8s.daemon-cluster-agent is running
Service snap.microk8s.daemon-containerd is running
Service snap.microk8s.daemon-kubelite is running
Service snap.microk8s.daemon-k8s-dqlite is running
Service snap.microk8s.daemon-apiserver-kicker is running
Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
Copy processes list to the final report tarball
Copy snap list to the final report tarball
Copy VM name (or none) to the final report tarball
Copy disk usage information to the final report tarball
Copy memory usage information to the final report tarball
Copy server uptime to the final report tarball
Copy current linux distribution to the final report tarball
Copy openSSL information to the final report tarball
Copy network configuration to the final report tarball
Inspecting kubernetes cluster
Inspect kubernetes cluster
Inspecting dqlite
Inspect dqlite

Building the report tarball
Report tarball is at /var/snap/microk8s/3272/inspection-report-20220607_081105.tar.gz

inspection-report-20220607_081105.tar.gz
Many Thanks,
Ranjith

@neoaggelos (Member)

Hi @varmaranjith

In the inspection report I see the following line repeated:

Jun 01 03:04:37 exstreamvm microk8s.daemon-containerd[3260538]: time="2022-06-01T03:04:37.974960527+05:30" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z74lg,Uid:040eb2a6-3846-474c-b575-d4c367d7bc67,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"k8s.gcr.io/pause:3.1\": failed to pull image \"k8s.gcr.io/pause:3.1\": failed to pull and unpack image \"k8s.gcr.io/pause:3.1\": failed to resolve reference \"k8s.gcr.io/pause:3.1\": failed to do request: Head \"https://k8s.gcr.io/v2/pause/manifests/3.1\": dial tcp 142.251.10.82:443: i/o timeout"

This means that containerd cannot fetch k8s.gcr.io/pause:3.1, which is the sandbox image used for creating pods. Perhaps k8s.gcr.io is not available from your area? Can you test if #3221 (comment) fixes this?
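
One quick way to confirm this from the node itself (a sketch; the image reference is taken from the log line above):

# Try the same pull through MicroK8s' bundled containerd client
microk8s ctr image pull k8s.gcr.io/pause:3.1
# And check that the registry endpoint is reachable at all from the host
curl -v --max-time 10 https://k8s.gcr.io/v2/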

@neoaggelos (Member)

The process to configure a registry mirror for k8s.gcr.io has been added to the documentation as well, see https://microk8s.io/docs/registry-private#configure-registry-mirrors-7
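
Roughly, the approach from that page looks like the sketch below (paths assume a default snap install; using registry.k8s.io as the mirror endpoint is my assumption here, so check the linked docs for the recommended mirror and exact file contents):

# Tell MicroK8s' containerd to fetch k8s.gcr.io images from a mirror
sudo mkdir -p /var/snap/microk8s/current/args/certs.d/k8s.gcr.io
sudo tee /var/snap/microk8s/current/args/certs.d/k8s.gcr.io/hosts.toml >/dev/null <<'EOF'
server = "https://k8s.gcr.io"

[host."https://registry.k8s.io"]
capabilities = ["pull", "resolve"]
EOF
# Restart MicroK8s so containerd picks up the new hosts.toml
sudo snap restart microk8s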


Buraillc commented Aug 8, 2022

I have the same issue. Did anyone here solve it?


uniuuu commented Apr 3, 2023

(quoting @neoaggelos's diagnosis and @Buraillc's question above)

I have a similar issue. In my case I couldn't enable MetalLB; it failed with: Waiting for Metallb controller to be ready. error: timed out waiting for the condition on deployments/controller
Before even looking at my logs, I connected the host server to a free OpenVPN server found on the Internet, to rule out blocking by my ISP, and then ran microk8s disable metallb && microk8s enable metallb
That resolved the issue. My ISP is infamous for blocking several targeted websites, and every other website sitting on the same IP range gets blocked too.
You can try this workaround and see if this is your case.
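
For reference, the workaround as plain commands (a sketch; the address pool below is just an example, and when run without an argument the addon prompts for one):

microk8s disable metallb
# Re-enable once outbound connectivity is confirmed, optionally passing the pool up front
microk8s enable metallb:10.64.140.43-10.64.140.49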


stale bot commented Feb 27, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the inactive label Feb 27, 2024
stale bot closed this as completed Mar 29, 2024