
DNS doesn't seem to be able to look up external addresses #1975

Closed
joedborg opened this issue Feb 5, 2021 · 6 comments


joedborg commented Feb 5, 2021

$ sudo snap install microk8s --classic
$ sudo microk8s enable dns

$ sudo microk8s kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml

$ sudo microk8s kubectl get all --all-namespaces
NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE
default       pod/dnsutils                                  1/1     Running   0          2m25s
kube-system   pod/coredns-86f78bb79c-t79sd                  1/1     Running   0          35s
kube-system   pod/calico-node-zg58n                         1/1     Running   0          4m7s
kube-system   pod/calico-kube-controllers-847c8c99d-ltfm4   1/1     Running   0          4m7s

NAMESPACE     NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.152.183.1    <none>        443/TCP                  4m12s
kube-system   service/kube-dns     ClusterIP   10.152.183.10   <none>        53/UDP,53/TCP,9153/TCP   35s

NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/calico-node   1         1         1       1            1           kubernetes.io/os=linux   4m12s

NAMESPACE     NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns                   1/1     1            1           35s
kube-system   deployment.apps/calico-kube-controllers   1/1     1            1           4m12s

NAMESPACE     NAME                                                DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-86f78bb79c                  1         1         1       35s
kube-system   replicaset.apps/calico-kube-controllers-847c8c99d   1         1         1       4m8s

DNS debugging steps followed from https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
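Following that guide, two quick checks are the pod's resolver config and the CoreDNS logs (a sketch using the resources shown above; the `k8s-app=kube-dns` label is the stock CoreDNS label, assumed here to match the MicroK8s deployment):

```shell
# The pod's resolver should point at the kube-dns service IP
# (10.152.183.10 in the output above).
sudo microk8s kubectl exec -i -t dnsutils -- cat /etc/resolv.conf

# Look for errors or dropped queries in the CoreDNS logs.
sudo microk8s kubectl logs --namespace=kube-system -l k8s-app=kube-dns
```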

But no lookups succeed, external or internal:

$ kubectl exec -i -t dnsutils -- nslookup github.com
;; connection timed out; no servers could be reached

command terminated with exit code 1

$ kubectl exec -i -t dnsutils -- nslookup bbc.co.uk
;; connection timed out; no servers could be reached

command terminated with exit code 1

$ kubectl exec -i -t dnsutils -- nslookup kubernetes.default
;; connection timed out; no servers could be reached

command terminated with exit code 1
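Since even the in-cluster name kubernetes.default fails, a useful next step is to query the kube-dns service IP directly, which separates "server unreachable" from "bad resolver config" (a sketch; the IP is taken from the service listing above):

```shell
# nslookup accepts an explicit server argument, bypassing /etc/resolv.conf.
sudo microk8s kubectl exec -i -t dnsutils -- nslookup github.com 10.152.183.10
```

If this also times out, the problem is connectivity to the DNS service rather than the pod's resolver configuration.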


joedborg commented Feb 5, 2021

If I disable ha-cluster, this works as expected:

$ sudo snap install microk8s --classic
$ sudo microk8s disable ha-cluster
$ sudo microk8s enable dns

$ sudo microk8s kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml

$ sudo microk8s kubectl get all --all-namespaces
NAMESPACE     NAME                           READY   STATUS    RESTARTS   AGE
default       pod/dnsutils                   1/1     Running   0          2m6s
kube-system   pod/coredns-86f78bb79c-mpmfv   1/1     Running   0          2m43s

NAMESPACE     NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.152.183.1    <none>        443/TCP                  5m29s
kube-system   service/kube-dns     ClusterIP   10.152.183.10   <none>        53/UDP,53/TCP,9153/TCP   2m43s

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   1/1     1            1           2m43s

NAMESPACE     NAME                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-86f78bb79c   1         1         1       2m43s

$ sudo microk8s kubectl exec -i -t dnsutils -- nslookup kubernetes.default
Server:		10.152.183.10
Address:	10.152.183.10#53

Name:	kubernetes.default.svc.cluster.local
Address: 10.152.183.1

$ sudo microk8s kubectl exec -i -t dnsutils -- nslookup github.com
Server:		10.152.183.10
Address:	10.152.183.10#53

Non-authoritative answer:
Name:	github.com
Address: 140.82.112.4


joedborg commented Feb 5, 2021

Whilst writing the attached test, it appears to be working okay without disabling HA on my local machine, so this might be a cloud issue.


joedborg commented Feb 9, 2021

This is due to the br_netfilter kernel module not being loaded correctly in the bootstrap scripts. We don't see this on local machines because the module is already loaded there, but it isn't on bare cloud images.

@ktsakalozos can you please attach the PR where this was fixed?
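For anyone hitting this before the fix lands, loading the module by hand and persisting it should act as a workaround (a sketch; the modules-load.d path is the standard systemd location, not something taken from the MicroK8s bootstrap scripts):

```shell
# Load br_netfilter now so bridged pod traffic traverses iptables.
sudo modprobe br_netfilter

# The bridge sysctls only exist once the module is loaded; this
# typically prints "net.bridge.bridge-nf-call-iptables = 1".
sysctl net.bridge.bridge-nf-call-iptables

# Persist the module across reboots.
echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
```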


joedborg commented Feb 9, 2021

Fixed in #1985

@joedborg joedborg closed this as completed Feb 9, 2021
@faireai-fmonorchio

I have exactly this problem with version v1.23.3 on Debian 11 in an HA setup with 3 nodes. The DNS pod and service are running correctly. Any help with this?


natanhp commented Aug 12, 2022

I got this error with Microk8s v1.24.3 on CentOS 7.9.2009

kubectl exec -i -t dnsutils -- nslookup kubernetes.default

;; connection timed out; no servers could be reached

command terminated with exit code 1

Update

I managed to solve the issue by enabling IP masquerade:

firewall-cmd --add-masquerade --permanent
firewall-cmd --reload
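If you're on a firewalld-based distribution, you can confirm the rule took effect (a minimal check):

```shell
# Prints "yes" when masquerading is active on the default zone.
sudo firewall-cmd --query-masquerade
```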

as I mentioned in #2407 (comment)
