coreDNS unable to resolve upstream #53
Comments
This is the Cloudflare public DNS service, like the Google DNS 8.8.8.8.
Hmm, it's probably being blocked on my network. Any idea how it's configured and how I could change it?
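One quick way to check whether 1.1.1.1 is reachable from the host (a sketch; the queried domain is just an example):

```sh
# Query 1.1.1.1 directly; a timeout here suggests the resolver is blocked on this network
dig @1.1.1.1 +time=2 +tries=1 cloudflare.com

# Alternative if dig is not installed
nslookup cloudflare.com 1.1.1.1
```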
@latchmihay We may have hard-coded 1.1.1.1; we will make that configurable. The default behavior of k8s is to use the host's /etc/resolv.conf as the upstream DNS, but because systemd-resolved is the default these days (and older setups used dnsmasq), that file typically contains a 127.0.0.x IP, which then breaks resolution inside the cluster. So it's quite hard in general to figure out what the upstream DNS should actually be, and we probably hard-coded it to 1.1.1.1. We will add this as an option to the agent and also document it.
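For reference, on a systemd-resolved host the stub setup typically looks like this; the real upstream servers live in a separate file:

```sh
# The glibc resolver points at the local stub listener, not a real upstream
cat /etc/resolv.conf
# nameserver 127.0.0.53

# systemd-resolved keeps the actual upstream nameservers here
cat /run/systemd/resolve/resolv.conf

# Or query systemd-resolved directly (systemd-resolve --status on older releases)
resolvectl status
```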
@ibuildthecloud Please keep the current behavior but make it configurable. Keeping a sane default saves a lot of time given the issue you described; I've hit it many times, and otherwise fixing DNS becomes an extra step on every new server installation...
I fixed it by changing the CoreDNS ConfigMap from 1.1.1.1 to 8.8.8.8... for whatever reason.
This can be done by replacing 1.1.1.1 with 8.8.8.8 in the coredns ConfigMap in the kube-system namespace, as sketched below.
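A minimal sketch of that edit, assuming a stock k3s CoreDNS deployment whose Corefile forwards to 1.1.1.1 (the exact Corefile contents vary by version):

```sh
# Open the CoreDNS Corefile for editing
kubectl -n kube-system edit configmap coredns

# In the Corefile, change the upstream, e.g.:
#   forward . 1.1.1.1   ->   forward . 8.8.8.8
# (very old CoreDNS versions used a `proxy` directive instead of `forward`)

# Restart CoreDNS so it picks up the change
kubectl -n kube-system rollout restart deployment coredns
```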
We have created a release candidate, v0.3.0-rc3, which will hopefully fix these DNS issues. Please try it out and let me know if it helps! The settings are configurable: we will either take a --resolv-conf flag to pass down to the kubelet, or a K3S_RESOLV_CONF environment variable will also work. We now try to use the system resolv.conf files (from /etc and systemd), and will create a /tmp/k3s-resolv.conf file with nameserver 8.8.8.8 if the nameservers in the system files are not global unicast IPs.
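For example, a sketch of both options, assuming /etc/k3s-resolv.conf is a file you created that contains a reachable nameserver (the path is illustrative):

```sh
# Pass the file down to the kubelet via the flag...
k3s server --resolv-conf /etc/k3s-resolv.conf

# ...or via the environment variable
K3S_RESOLV_CONF=/etc/k3s-resolv.conf k3s server
```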
I tried it out on Ubuntu 16.04.6 LTS with
But when I try
Using other machines in the same networks that use
Update: It's a Kubernetes issue. I found out that this was caused by the Kubernetes ndots config. By default, Kubernetes sets ndots:5 in each pod's /etc/resolv.conf, so any name containing fewer than five dots is first tried with the cluster search domains appended. I assume this should speed things up for internal cluster DNS entries, but for external DNS it can slow things down. To solve this, customize the pod's DNS configuration by applying a dnsConfig to the pod spec, as sketched below.
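A minimal sketch of that per-pod override (the pod name and image are placeholders; spec.dnsConfig is standard Kubernetes API):

```sh
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: dns-example
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
  dnsConfig:
    options:
    - name: ndots
      value: "1"  # resolve external names as absolute first instead of walking the search list
EOF
```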
But regarding Kubernetes' own DNS, I'd consider this a workaround for local use, since I'm not yet sure about the performance implications in production. As another solution, we could force absolute domain names with a trailing dot (illustrated below). Currently, I'm using the
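To illustrate the trailing-dot behavior (the domain is just an example):

```sh
# Without a trailing dot, ndots:5 makes the resolver try the search domains first:
#   example.com.default.svc.cluster.local, example.com.svc.cluster.local, ...
nslookup example.com

# A trailing dot marks the name as fully qualified, so the search list is skipped
nslookup example.com.
```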
Is there any way to set the
For anyone arriving here from a search engine, I was able to resolve my cluster's DNS issues by (a) using legacy iptables rather than nftables, (b) ensuring the CNI is correctly installed (I use Calico with hardware that has multiple NICs, which requires additional setup for IP detection), and (c) flushing the iptables rules left over from the CNI between cluster installs:

```sh
iptables --version
# iptables v1.8.7 (legacy)

# Reset the default policies and flush all rules and chains left over from the previous CNI
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -t nat -F
iptables -t mangle -F
iptables -F
iptables -X

# ... Install k3s
```
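For step (a), on Debian/Ubuntu-based systems the legacy backend can be selected with update-alternatives (a sketch; other distributions differ):

```sh
# Switch from the nft-based iptables to the legacy backend
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy

iptables --version   # should now report (legacy)
```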
Not completely on topic, but the fact that it's issue #53 that is about DNS sounds intentional, 53 being the DNS port :)
Hello, I have a plain installation of k3s on Ubuntu 18.04.
I am running a container which is failing to resolve DNS.
I am not sure what 1.1.1.1 is or where it's coming from.