dns can't resolve kubernetes.default and/or cluster.local #66924
Comments
---
/sig cli

---
Actually the same problem is on the newest macOS, when I run a busybox pod and nslookup the domain:

---
This means that the health container is getting no response from the dnsmasq container, which is all via local addresses in the same pod, so it shouldn't be a k8s-level networking issue.

Edit: Actually, looking at the timestamps on the logs it's not clear; all the connection-refusal messages were logged before dnsmasq was listening, so those messages are expected. Presumably they stopped after 17:53:35?

---
How about my macOS (docker-for-desktop) issue? Do you know why I'm getting this:

$ kubectl -n default exec -ti busybox nslookup kubernetes.default
$ kubectl -n default exec -ti busybox nslookup svc.cluster.local
$ kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name
$ kubectl logs --namespace=kube-system kube-dns-86f4d74b45-b4dd8 -c kubedns
$ kubectl logs --namespace=kube-system kube-dns-86f4d74b45-b4dd8 -c dnsmasq
$ kubectl logs --namespace=kube-system kube-dns-86f4d74b45-b4dd8 -c sidecar

---
Try querying one of the kube-dns pods directly, to see if it's a network-layer issue, e.g.:

$ kubectl -n kube-system get pods | grep dns
$ kubectl -n default exec -ti busybox nslookup kubernetes.default 10.1.0.3
$ kubectl -n default exec -ti busybox nslookup cluster.local 10.1.0.3
$ kubectl -n kube-system describe pod kube-dns-86f4d74b45-b4dd8

---
I just noticed you are running coredns and kube-dns in parallel... Can you query the coredns pods directly via pod IP?
On my MacOS I have only one pod with
---
Any solution on this? Also having this problem.

---
I am facing the same issue, running core-dns in a kubeadm cluster.

---
It looks like DNS inside busybox does not work properly.

---
@gogene Ok - version
Btw. do you know why it can't resolve
THX
…akes DNS resolution slow - see kubernetes/kubernetes#66924
Fixing version for busybox, as DNS for busybox doesn't work for versions > 1.28.4. For more details refer here: kubernetes/kubernetes#66924 (comment)
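A minimal way to apply that version pin is directly in the pod spec. The manifest below is an illustrative sketch (the pod name and sleep duration are made up, not from the thread):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    # Pinned to 1.28: nslookup in later busybox releases is reported in
    # this issue to misbehave against cluster DNS.
    image: busybox:1.28
    command: ["sleep", "3600"]
```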
---
In my case it was a missing iptables rule on a dedicated server. Resolved by executing on the server:

---
@gogene P.S. In 2020.08, busybox 1.32.0 still has the problem in nslookup. (2 years have passed...)

---
There are two reasons that cause this issue:
---
Changes were done with these commands:

reprec 'image: busybox(?!:)' 'image: busybox:1.28' */docs */examples
reprec -- '--image=busybox(?!:)' '--image=busybox:1.28' */docs */examples

Related issues: docker-library/busybox#48 kubernetes/kubernetes#66924
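For reference, the `(?!:)` in those reprec patterns is a negative lookahead: it pins only bare `busybox` references and leaves already-tagged images (e.g. `busybox:1.28`) alone. The same pattern can be sanity-checked with any PCRE-capable tool, for example GNU `grep -P`:

```shell
# Only the untagged line matches; "busybox:1.28" fails the (?!:) lookahead.
printf 'image: busybox\nimage: busybox:1.28\n' | grep -P 'image: busybox(?!:)'
# -> image: busybox
```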
---
I tried @plantegg's solution adding
I also ran nslookup for
Below is the shell script I used to calculate this result.

echo '#!/bin/sh
while true; do
  nslookup -timeout=2 kubernetes > /dev/null 2>&1
  result=$?
  if [ "$result" = "0" ]; then
    echo "$(date +%s) : $result : pass" >> /tmp/nslookup_status
  else
    echo "$(date +%s) : $result : fail" >> /tmp/nslookup_status
  fi
done' > nslookup_status.sh
chmod +x nslookup_status.sh
./nslookup_status.sh &

busybox-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: "busybox1"
spec:
  containers:
  - image: busybox
    name: busybox
    command: ["sleep", "6000"]
  dnsConfig:
    options:
    - name: ndots
      value: "7"

busybox Image hash :
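To turn a log like /tmp/nslookup_status into a pass/fail count, a short awk summary works. This is an illustrative sketch; the sample log lines below are made up stand-ins for real output of the loop script:

```shell
# Build a small sample log in the same "epoch : code : pass|fail" format.
log=$(mktemp)
printf '%s\n' \
  '1628000001 : 0 : pass' \
  '1628000003 : 1 : fail' \
  '1628000005 : 0 : pass' > "$log"

# Count pass/fail entries; field 3 with " : " as the separator.
awk -F' : ' '{ n[$3]++ } END { printf "pass=%d fail=%d\n", n["pass"], n["fail"] }' "$log"
# -> pass=2 fail=1
```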
---
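A note on the `ndots: 7` setting used above: any name with fewer dots than `ndots` is first tried with each resolv.conf search domain appended, and only tried as-is last, which multiplies the DNS queries per lookup. A plain-shell sketch of that ordering (illustrative only; the real logic lives in the libc resolver, and `expand_query` is a made-up name):

```shell
# Mimic how a resolver with "ndots" and a search list orders candidate queries.
expand_query() {
  name=$1; ndots=$2; shift 2
  # Count the dots in the name.
  dots=$(printf '%s' "$name" | awk -F. '{print NF-1}')
  # Enough dots: try the name as absolute first.
  if [ "$dots" -ge "$ndots" ]; then echo "$name."; fi
  # Append each search domain in order.
  for domain in "$@"; do echo "$name.$domain."; done
  # Too few dots: the literal name is only tried last.
  if [ "$dots" -lt "$ndots" ]; then echo "$name."; fi
}

expand_query kubernetes.default 5 \
  default.svc.cluster.local svc.cluster.local cluster.local
```

With one dot and ndots=5, the three search-domain expansions are tried before the literal name, so "kubernetes.default" costs up to four queries per address family.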
Just for the record, I opened a new issue at the bugtracker of busybox: https://bugs.busybox.net/show_bug.cgi?id=14671

---
/kind bug
What happened:
I've set up a Kubernetes cluster on Ubuntu 18.04, v1.11.1:
KubeDNS:
Version:
When I run busybox for testing:
I am getting this:
What you expected to happen:
I expect the kubernetes.default or cluster.local to be resolved.
How to reproduce it (as minimally and precisely as possible):
Maybe try to install a new k8s cluster on Ubuntu 18.04 following the official instructions.
Anything else we need to know?:
Environment:
- Kubernetes version (kubectl version):
- Cloud provider or hardware configuration: Bare metal, OVH, Ubuntu 18.04
- Kernel (uname -a):

These are my pods:

Here are pod logs: