kubedns dnsmasq pod fails with blank IP for the DNS host #66
The kube-dns ConfigMap won't affect this. It seems like you've got an empty pod IP for this kube-dns pod. Can you also check the output of …? It would be helpful if you could also check kubelet's logs on the node where the kube-dns pod is running.
I see. Other pods don't seem to have this problem. kube-proxy, for example, has a complete hosts file. The kube-dns pod's YAML is attached.
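For anyone hitting the same symptom, a few standard kubectl checks can confirm whether the pod actually has an IP allocated. These are command sketches that need a live cluster; `kube-dns-xxxx` is a placeholder for the real pod name:

```shell
# Show every kube-system pod with its allocated IP (the IP column):
kubectl get pods -n kube-system -o wide

# Dump the pod object and look for status.podIP; an affected pod
# will have it missing or empty:
kubectl get pod kube-dns-xxxx -n kube-system -o yaml | grep -i podIP

# Compare /etc/hosts inside the kubedns container with a healthy pod:
kubectl exec -n kube-system kube-dns-xxxx -c kubedns -- cat /etc/hosts
```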
These lines from syslog on the master (the only node so far) are interesting:
And indeed, ifconfig in the kubedns container has only "lo":
Hmm, could you check other pods that are not using hostNetwork? (kube-proxy sets hostNetwork=true.) From your YAML file it seems the pod does not have a podIP allocated. I wonder if the underlying network is broken. Here is a similar issue: https://github.com/projectcalico/calicoctl/issues/1407
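One way to see at a glance which pods run with hostNetwork (and therefore inherit the node's IP instead of exercising the pod network) is a custom-columns listing; this is a sketch that assumes access to a live cluster:

```shell
# Pods with HOSTNET=true (like kube-proxy) report the node IP, so
# only pods with HOSTNET=<none>/false actually test the pod network.
kubectl get pods -n kube-system \
  -o custom-columns='NAME:.metadata.name,IP:.status.podIP,HOSTNET:.spec.hostNetwork'
```

If every non-hostNetwork pod is missing its IP, the problem is the pod network plugin rather than kube-dns itself.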
Close. Due to a Docker bug, the DNS pod couldn't be spun up (the memory limits couldn't be written properly). After fixing that, I get a little closer, but it can't seem to reach what I'm guessing is the API server. kubeadm sets this up on port 6443, but that isn't what kube-dns is trying to reach:
Yeah, it was trying to reach the apiserver through the default kubernetes service but failed. I'm not sure how kubeadm sets this up, but as far as I know kube-dns either uses the kubecfg file specified by …
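To see what the default kubernetes service actually points at, two quick checks (command sketches that assume a live kubeadm cluster; as I understand it the service itself typically listens on 443 while its endpoint is the apiserver's real port, e.g. 6443 under kubeadm):

```shell
# The ClusterIP and port that in-cluster clients such as kube-dns use:
kubectl get svc kubernetes -n default

# The real backend the service forwards to; on a kubeadm cluster this
# should be the apiserver's advertise address on port 6443:
kubectl get endpoints kubernetes -n default
```

If the endpoints list is empty or points at the wrong address/port, in-cluster API access will fail exactly as described above.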
It's the env vars, and they're wrong. What controls setting those?
These values are piped into kubelet through … I'm not sure whether the real issue is the incorrect kubernetes service IP/port or not. It feels like there may be some mechanism in kubeadm I misunderstood. You may want to open an issue against kubeadm to find the real root cause.
Oops, looks like I was wrong. Those env vars are generated from the services themselves, not from flags:
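For context: these service environment variables follow a fixed naming convention (`<NAME>_SERVICE_HOST` / `<NAME>_SERVICE_PORT`, with the service name upper-cased and dashes replaced by underscores), populated from each service's ClusterIP when the pod is created. A minimal sketch of the naming rule, using a made-up ClusterIP:

```shell
# Kubernetes injects, for every service visible when a pod is created,
# variables named <NAME>_SERVICE_HOST and <NAME>_SERVICE_PORT.
service_name="kubernetes"
cluster_ip="10.96.0.1"   # hypothetical example value, not from this issue

# Upper-case the name and turn dashes into underscores:
var_name="$(printf '%s' "$service_name" | tr 'a-z-' 'A-Z_')_SERVICE_HOST"
export "$var_name=$cluster_ip"

echo "$KUBERNETES_SERVICE_HOST"   # prints 10.96.0.1
```

This is why the values track the service object rather than any kubelet flag: if the kubernetes service has a wrong ClusterIP or port, every pod inherits the wrong env vars.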
I agree; I don't think kube-dns has anything to do with this. Closing.
Can't get a cluster started using kubeadm on v1.6.0-beta1 (kube-dns v1.14.1). The dnsmasq pod fails with:
/etc/hosts contains:
So presumably it can't get the information it needs from the ConfigMap?