DNS is crashlooping #67

Closed
dustinkirkland opened this issue Jul 13, 2018 · 7 comments

@dustinkirkland commented Jul 13, 2018

$ microk8s.kubectl get all --all-namespaces 
NAMESPACE     NAME                                                  READY     STATUS             RESTARTS   AGE
kube-system   pod/heapster-v1.5.2-84f5c8795f-m466m                  4/4       Running            0          23m
kube-system   pod/kube-dns-864b8bdc77-6mst4                         2/3       CrashLoopBackOff   15         23m
kube-system   pod/kubernetes-dashboard-6948bdb78-262gm              0/1       CrashLoopBackOff   8          23m
kube-system   pod/monitoring-influxdb-grafana-v4-7ffdc569b8-dbmvg   2/2       Running            0          23m

NAMESPACE     NAME                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
default       service/kubernetes             ClusterIP   10.152.183.1     <none>        443/TCP             23m
kube-system   service/heapster               ClusterIP   10.152.183.109   <none>        80/TCP              23m
kube-system   service/kube-dns               ClusterIP   10.152.183.10    <none>        53/UDP,53/TCP       23m
kube-system   service/kubernetes-dashboard   ClusterIP   10.152.183.178   <none>        443/TCP             23m
kube-system   service/monitoring-grafana     ClusterIP   10.152.183.68    <none>        80/TCP              23m
kube-system   service/monitoring-influxdb    ClusterIP   10.152.183.252   <none>        8083/TCP,8086/TCP   23m

NAMESPACE     NAME                                             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/heapster-v1.5.2                  1         1         1            1           23m
kube-system   deployment.apps/kube-dns                         1         1         1            0           23m
kube-system   deployment.apps/kubernetes-dashboard             1         1         1            0           23m
kube-system   deployment.apps/monitoring-influxdb-grafana-v4   1         1         1            1           23m

NAMESPACE     NAME                                                        DESIRED   CURRENT   READY     AGE
kube-system   replicaset.apps/heapster-v1.5.2-84f5c8795f                  1         1         1         23m
kube-system   replicaset.apps/kube-dns-864b8bdc77                         1         1         0         23m
kube-system   replicaset.apps/kubernetes-dashboard-6948bdb78              1         1         0         23m
kube-system   replicaset.apps/monitoring-influxdb-grafana-v4-7ffdc569b8   1         1         1         23m

@hyperbolic2346 commented Jul 13, 2018

Anything interesting in the logs for kube-dns?
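Something like this should show them (pod name taken from the output above; the container names assume the stock kube-dns deployment, which runs kubedns, dnsmasq and sidecar containers):

$ microk8s.kubectl -n kube-system logs kube-dns-864b8bdc77-6mst4 -c kubedns
$ microk8s.kubectl -n kube-system logs kube-dns-864b8bdc77-6mst4 -c dnsmasq
$ microk8s.kubectl -n kube-system logs kube-dns-864b8bdc77-6mst4 -c sidecar

Adding --previous to any of those shows the log of the last crashed container instead of the current one.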


@ktsakalozos commented Jul 13, 2018

@dustinkirkland can you also make sure there is no firewall blocking DNS, as reported in this issue: #66
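A quick way to check, assuming ufw is the firewall in play (denied packets only show up in /var/log/ufw.log when ufw logging is enabled):

$ sudo ufw status verbose
$ sudo tail -n 50 /var/log/ufw.log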


@dustinkirkland commented Jul 13, 2018

@tvansteenburgh commented Jul 13, 2018

Inspecting the ufw log showed that all the denials were happening on the cbr0 interface.
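For reference, a grep along these lines is enough to spot them, assuming the default ufw log location:

$ sudo grep 'UFW BLOCK' /var/log/ufw.log | grep cbr0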

ubuntu@ip:~$ ifconfig cbr0
cbr0      Link encap:Ethernet  HWaddr 0a:58:0a:01:01:01  
          inet addr:10.1.1.1  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::e0d0:96ff:fee2:633e/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:5577 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4904 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:989161 (989.1 KB)  TX bytes:2862109 (2.8 MB)

The 10.1.1.0/24 subnet corresponds to the pod IP addresses:

ubuntu@ip:~$ microk8s.kubectl get po -n kube-system -o wide
NAME                                              READY     STATUS    RESTARTS   AGE       IP         NODE
heapster-v1.5.2-577898ddbf-8mz8j                  4/4       Running   0          9m        10.1.1.7   ip-172-31-19-85
kube-dns-864b8bdc77-4n5s9                         3/3       Running   6          15m       10.1.1.2   ip-172-31-19-85
kubernetes-dashboard-6948bdb78-62n4r              1/1       Running   5          15m       10.1.1.4   ip-172-31-19-85
monitoring-influxdb-grafana-v4-7ffdc569b8-2t2b4   2/2       Running   0          15m       10.1.1.3   ip-172-31-19-85

So the fix was:

sudo ufw allow in on cbr0 && sudo ufw allow out on cbr0
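After adding those rules the pod should recover on its own; to confirm the rules and speed things up, the DNS pod can be deleted so the ReplicaSet recreates it immediately (pod name from the original report):

$ sudo ufw status | grep cbr0
$ microk8s.kubectl -n kube-system delete pod kube-dns-864b8bdc77-6mst4
$ microk8s.kubectl -n kube-system get pods -w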


@dustinkirkland commented Jul 16, 2018

@reyou commented Dec 27, 2018

@tvansteenburgh OMG! I was banging my head against the wall for a week over this, and it works like a charm! Thanks a ton!


@oz123 commented Feb 26, 2020

sudo ufw allow in on cbr0 && sudo ufw allow out on cbr0

Don't blindly copy this; ufw will now complain if there is no cbr0. Use:

sudo brctl show

Then find out which bridge it actually is (mine was cni0).
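If brctl isn't installed, the bridges can also be listed with iproute2, and the same two ufw rules are then applied to whatever bridge is actually there (cni0 below is just an example name):

$ ip -o link show type bridge
$ sudo ufw allow in on cni0 && sudo ufw allow out on cni0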

