my pods can't resolve hostname / reach DNS server #2206
Maybe you need to open up a firewall between this host and your DNS servers?
hi @balchua, I can reach any of these DNS servers from the host machine:
[alex@snapqa6 ~]$ telnet 16.110.135.51 53
the routes are below:
no firewall is running:
[alex@snapqa6 ~]$ service firewalld status
[alex@snapqa6 ~]$ service nftables status
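Extending the telnet check above: a TCP connect on port 53 succeeding does not guarantee UDP DNS queries get answered, so it can help to probe each upstream resolver with a real query. A minimal sketch only: `probe_dns` is a hypothetical helper name, and `dig` (from the bind-utils/dnsutils package) is assumed to be installed on the host.

```shell
#!/bin/sh
# Sketch: probe each upstream DNS server with a real query, not just a
# TCP connect. probe_dns is a hypothetical helper, not part of microk8s.
probe_dns() {
    # Returns 0 if the server in $1 answers a query within 2 seconds
    dig +time=2 +tries=1 @"$1" google.com >/dev/null 2>&1
}

if command -v dig >/dev/null 2>&1; then
    # Server list taken from the Corefile's forward directive in this thread
    for s in 16.110.135.51 16.110.135.52 8.8.8.8; do
        if probe_dns "$s"; then
            echo "$s answers DNS"
        else
            echo "$s does NOT answer DNS"
        fi
    done
else
    echo "dig not installed; skipping live probe"
fi
```

If all three answer from the host while pods still cannot resolve, the break is between the pod network and the host rather than in the upstream servers.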
I think I know what's happening. The flannel CNI isn't starting, due to this error in awk.
Maybe it's missing some libraries? Googling a bit, this seems to be related to ncurses, but I'm not sure. If anyone is familiar with this, any pointers are welcome.
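One way to chase the missing-library theory is to list the dynamic dependencies of the suspect binary and flag any the loader cannot resolve. A sketch, assuming a Linux host with `ldd`; `check_libs` is a hypothetical helper, and the path would need to be whichever awk the flannel scripts actually invoke (microk8s ships its own tools under /snap).

```shell
#!/bin/sh
# Sketch: flag shared libraries the loader cannot find for a given binary.
# check_libs is a hypothetical helper; adjust the path to the awk binary
# that the flannel CNI scripts actually invoke.
check_libs() {
    # "not found" lines from ldd are exactly the unresolved libraries
    ldd "$1" 2>/dev/null | awk '/not found/ {print $1}'
}

check_libs /usr/bin/awk   # no output means all libraries resolve
```

A line such as `libncurses.so.5` in the output would confirm the ncurses suspicion above.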
thanks @balchua
Hello dear microk8s team,
I'm facing an issue with accessing the DNS server from within a pod.
==========
see running pods:
[alex@snapqa6 ~]$ k get pod --all-namespaces
NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
default       pod/nginx-6799fc88d8-g6ndp                  1/1     Running   2          92m
kube-system   pod/coredns-7f9c69c78c-rrjlh                1/1     Running   1          79m
kube-system   pod/hostpath-provisioner-5c65fbdb4f-m4ksd   1/1     Running   2          10h
==========
go into the nginx pod:
[alex@snapqa6 ~]$ k exec -ti pod/nginx-6799fc88d8-g6ndp -- /bin/bash
==========
curl google.com
root@nginx-6799fc88d8-g6ndp:/# curl http://google.com
curl: (6) Could not resolve host: google.com
root@nginx-6799fc88d8-g6ndp:/# exit
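To narrow this down, it helps to distinguish "DNS is broken" from "pod networking is broken in general" by comparing lookups from inside the pod. This is only a sketch: `check_pod_dns` is a hypothetical helper, `getent` is assumed to exist in the nginx image (it does in Debian-based ones), and a live cluster is required, so the call is left commented out.

```shell
#!/bin/sh
# Sketch: separate "DNS broken" from "pod network broken".
# check_pod_dns is a hypothetical helper; requires a running cluster.
check_pod_dns() {
    pod=$1
    microk8s kubectl exec "$pod" -- sh -c '
        echo "--- resolv.conf ---"; cat /etc/resolv.conf
        getent hosts kubernetes.default || echo "cluster name lookup FAILED"
        getent hosts google.com         || echo "external name lookup FAILED"
    '
}

# Requires a live microk8s cluster:
# check_pod_dns nginx-6799fc88d8-g6ndp
```

If resolv.conf points at the kube-dns Service IP but both lookups fail, the pod likely cannot reach CoreDNS at all, which points at the CNI rather than at the Corefile's forward list.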
==========
iptables are off:
[root@snapqa6 ~]# service iptables status
Redirecting to /bin/systemctl status iptables.service
Unit iptables.service could not be found.
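Even with firewalld, nftables, and the iptables service all absent, pod egress still depends on kernel IP forwarding being enabled and on the netfilter FORWARD chain not defaulting to DROP, so both are worth checking on the host. A sketch only: `check_forwarding` is a hypothetical helper, and the iptables check needs root, so the call is commented out.

```shell
#!/bin/sh
# Sketch: two host-level settings that silently break pod egress even when
# no firewall *service* is running. check_forwarding is a hypothetical helper.
check_forwarding() {
    # Should report net.ipv4.ip_forward = 1 on a working node
    sysctl net.ipv4.ip_forward 2>/dev/null || cat /proc/sys/net/ipv4/ip_forward
    # A DROP policy here blocks forwarded pod traffic (needs root to read)
    iptables -S FORWARD 2>/dev/null | head -n 1
}

# Run as root on the host:
# check_forwarding
```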
==========
DNS is configured:
[alex@snapqa6 ~]$ microk8s kubectl -n kube-system edit configmap/coredns
Please edit the object below. Lines beginning with a '#' will be ignored,
and an empty file will abort the edit. If an error occurs while saving this file will be
reopened with the relevant failures.
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        log . {
            class error
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . 16.110.135.51 16.110.135.52 8.8.8.8
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"Corefile":".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n log . {\n class error\n }\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n }\n prometheus :9153\n forward . 16.110.135.51 16.110.135.52 8.8.8.8 \n cache 30\n loop\n reload\n loadbalance\n}\n"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists","k8s-app":"kube-dns"},"name":"coredns","namespace":"kube-system"}}
  creationTimestamp: "2021-04-26T20:17:07Z"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    k8s-app: kube-dns
  name: coredns
  namespace: kube-system
  resourceVersion: "1368532"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
  uid: 6f40bcdc-aefc-42e2-97d4-a3a08db96c2e
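Since the forward list in this Corefile looks sane, the remaining question is whether CoreDNS itself is healthy and reachable. A sketch, assuming the usual microk8s kube-dns Service ClusterIP of 10.152.183.10 (verify with `microk8s kubectl get svc -n kube-system kube-dns`); `coredns_health` is a hypothetical helper, commented out because it needs a live cluster.

```shell
#!/bin/sh
# Sketch: check CoreDNS from the host. coredns_health is a hypothetical
# helper; 10.152.183.10 is the usual microk8s kube-dns ClusterIP, but
# verify it with: microk8s kubectl get svc -n kube-system kube-dns
coredns_health() {
    # Recent CoreDNS logs: look for loop detection or forward errors
    microk8s kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20
    # Query CoreDNS directly through the Service IP
    dig +time=2 @10.152.183.10 kubernetes.default.svc.cluster.local
}

# Requires a live microk8s cluster:
# coredns_health
```

A successful direct query combined with failing lookups from pods would again point at the flannel/CNI layer rather than at CoreDNS or its upstream servers.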