
Services that come up in pods are unable to resolve DNS /sig dns #80012

Closed
RajaSureshAditya opened this issue Jul 11, 2019 · 4 comments

@RajaSureshAditya commented Jul 11, 2019

Hi, I have created a Kubernetes cluster using kubeadm, with Weave Net as the CNI. Pods are unable to resolve the service I created, and I am not able to understand where I went wrong.

[root@k8s-master postgres]# kubectl get svc,rc,pod --namespace=kube-system -o wide
NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE     SELECTOR
service/kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   7d12h   k8s-app=kube-dns

NAME                                     READY   STATUS    RESTARTS   AGE     IP              NODE           NOMINATED NODE   READINESS GATES
pod/coredns-5c98db65d4-8wsgl             1/1     Running   0          42m     10.36.0.2       worker-node2   <none>           <none>
pod/coredns-5c98db65d4-pgx68             1/1     Running   0          42m     10.44.0.5       worker-node1   <none>           <none>
pod/etcd-k8s-master                      1/1     Running   1          7d12h   10.196.155.29   k8s-master     <none>           <none>
pod/kube-apiserver-k8s-master            1/1     Running   0          7d12h   10.196.155.29   k8s-master     <none>           <none>
pod/kube-controller-manager-k8s-master   1/1     Running   0          7d12h   10.196.155.29   k8s-master     <none>           <none>
pod/kube-proxy-2k2r5                     1/1     Running   0          7d12h   10.196.155.29   k8s-master     <none>           <none>
pod/kube-proxy-h76dp                     1/1     Running   0          7d12h   10.196.155.28   worker-node1   <none>           <none>
pod/kube-proxy-j6n4x                     1/1     Running   0          6d19h   10.196.155.27   worker-node2   <none>           <none>
pod/kube-scheduler-k8s-master            1/1     Running   0          7d12h   10.196.155.29   k8s-master     <none>           <none>
pod/weave-net-9hmjb                      2/2     Running   0          7d12h   10.196.155.28   worker-node1   <none>           <none>
pod/weave-net-dcw67                      2/2     Running   0          7d12h   10.196.155.29   k8s-master     <none>           <none>
pod/weave-net-jv6pk                      2/2     Running   0          6d19h   10.196.155.27   worker-node2   <none>           <none>
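
From this output the DNS components themselves look healthy: both CoreDNS pods are Running and the kube-dns service has ClusterIP 10.96.0.10. A minimal way to reproduce the failure from inside the cluster (the pod name is arbitrary; busybox:1.28 is used here because nslookup in newer busybox images is unreliable):

kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default

If this lookup fails even though the CoreDNS pods are healthy, the problem is usually in the resolver the pods are handed, or in CoreDNS's upstream.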

[root@k8s-master postgres]# cat /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
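
The dropin above is the stock kubeadm file; the DNS server address that kubelet writes into each pod's /etc/resolv.conf comes from the clusterDNS field of the config file it references. A quick sanity check, assuming the default kubeadm layout (the expected value is the kube-dns ClusterIP shown earlier):

grep -A1 clusterDNS /var/lib/kubelet/config.yaml
# expected:
# clusterDNS:
# - 10.96.0.10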

I am unable to find the solution, and I cannot reset the cluster now.

[root@k8s-master postgres]# ifconfig
datapath: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1376
inet6 fe80::d0d0:b8ff:feac:c751 prefixlen 64 scopeid 0x20<link>
ether d2:d0:b8:ac:c7:51 txqueuelen 1000 (Ethernet)
RX packets 923 bytes 104240 (101.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8 bytes 656 (656.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:a2ff:feeb:b16e prefixlen 64 scopeid 0x20<link>
ether 02:42:a2:eb:b1:6e txqueuelen 0 (Ethernet)
RX packets 3161 bytes 213246 (208.2 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 5272 bytes 12288227 (11.7 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.196.155.29 netmask 255.255.255.224 broadcast 10.196.155.31
inet6 fe80::250:56ff:fe99:4c30 prefixlen 64 scopeid 0x20<link>
ether 00:50:56:99:4c:30 txqueuelen 1000 (Ethernet)
RX packets 7683343 bytes 5300435196 (4.9 GiB)
RX errors 0 dropped 12 overruns 0 frame 0
TX packets 8844229 bytes 6480106001 (6.0 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 105977033 bytes 17630223871 (16.4 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 105977033 bytes 17630223871 (16.4 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

vethwe-bridge: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1376
inet6 fe80::8f9:e7ff:fe59:6a23 prefixlen 64 scopeid 0x20<link>
ether 0a:f9:e7:59:6a:23 txqueuelen 0 (Ethernet)
RX packets 29940 bytes 2951428 (2.8 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 25565 bytes 8425973 (8.0 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

vethwe-datapath: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1376
inet6 fe80::7c18:e3ff:fe8b:292b prefixlen 64 scopeid 0x20<link>
ether 7e:18:e3:8b:29:2b txqueuelen 0 (Ethernet)
RX packets 105977033 bytes 17630223871 (16.4 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 105977033 bytes 17630223871 (16.4 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

vxlan-6784: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 65535
inet6 fe80::c8b0:9fff:fe96:7f5c prefixlen 64 scopeid 0x20<link>
ether ca:b0:9f:96:7f:5c txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1242924 bytes 1710259380 (1.5 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

weave: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1376
inet 10.32.0.1 netmask 255.240.0.0 broadcast 10.47.255.255
inet6 fe80::843d:bfff:fee3:7cef prefixlen 64 scopeid 0x20<link>
ether 86:3d:bf:e3:7c:ef txqueuelen 1000 (Ethernet)
RX packets 3137760 bytes 201456738 (192.1 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2817944 bytes 844694754 (805.5 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
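
The weave interface is up and carrying the 10.32.0.0/12 pod network, so the overlay itself looks fine. A useful next check is whether pods are actually being handed the cluster DNS server (the pod name here is a placeholder):

kubectl exec -ti <failing-pod> -- cat /etc/resolv.conf
# should contain: nameserver 10.96.0.10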

/sig dns

@k8s-ci-robot (Contributor) commented Jul 11, 2019

@RajaSureshAditya: There are no sig labels on this issue. Please add a sig label by either:

  1. mentioning a sig: @kubernetes/sig-<group-name>-<group-suffix>
    e.g., @kubernetes/sig-contributor-experience-<group-suffix> to notify the contributor experience sig, OR

  2. specifying the label manually: /sig <group-name>
    e.g., /sig scalability to apply the sig/scalability label

Note: Method 1 will trigger an email to the group. See the group list.
The <group-suffix> in method 1 has to be replaced with one of these: bugs, feature-requests, pr-reviews, test-failures, proposals.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@RajaSureshAditya (Author) commented Jul 11, 2019

@kubernetes/sig-contributor-experience-dns

@chrisohaver (Contributor) commented Jul 11, 2019

Have you tried the troubleshooting steps in the Kubernetes Docs?

https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
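
The first checks on that page boil down to the following (the dnsutils pod is created from a manifest in the doc itself):

kubectl exec -i -t dnsutils -- nslookup kubernetes.default
kubectl get endpoints kube-dns --namespace=kube-system
kubectl logs --namespace=kube-system -l k8s-app=kube-dns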

@RajaSureshAditya (Author) commented Jul 16, 2019

Yes, it's working fine now. I updated my /etc/resolv.conf and deleted the CoreDNS pods, and after that everything worked.

Thanks @chrisohaver
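
For anyone landing here later: by default CoreDNS forwards names it cannot resolve in-cluster to the nameservers listed in the node's /etc/resolv.conf (the forward . /etc/resolv.conf block in its Corefile), so a stale or unreachable entry there breaks pod DNS. A sketch of the fix described above (the nameserver address is only an example):

# on each node, point the host resolver at a reachable upstream
echo "nameserver 8.8.8.8" > /etc/resolv.conf   # example upstream only

# then recreate the CoreDNS pods so they re-read the node's resolv.conf
kubectl -n kube-system delete pod -l k8s-app=kube-dns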
