Docker fails with 127.0.0.1 in resolv.conf #6388

Closed
alexlarsson opened this Issue Jun 12, 2014 · 21 comments

alexlarsson commented Jun 12, 2014

If the host resolv.conf has a nameserver on 127.0.0.1, this will not work in the container, because inside the container that address refers to the container's own loopback.

Using a 127.0.0.1 DNS proxy is not unusual. For instance, Fedora plans to add one by default in Fedora 22, in order to better support DNSSEC. See https://fedoraproject.org/wiki/Changes/Default_Local_DNS_Resolver for details.

I think docker needs to rewrite the resolv.conf file in order to handle this. Some possible solutions:

  • If 127.0.0.1 is in resolv.conf, forward the port from the container loopback to the host loopback
  • Set up a fixed resolv.conf in the container that points to some fixed ip in the container network range. Then forward this to whatever the host resolv.conf says.

rhatdan commented Jun 17, 2014

How hard would this be to do? Can this be done with firewall rules?

alexlarsson commented Jun 17, 2014

@rhatdan I don't really know; networking is not my area of expertise.

tianon commented Jun 17, 2014

/me invokes the power of @jpetazzo, who sounded like he had a good answer to a similar question on another issue.

adelton commented Jun 17, 2014

My plan is to prepare a pull request adding support for --dns host and --dns link:link-name options, which would translate to the public IP address of the host and the IP address of the link, respectively.

alexlarsson commented Jun 17, 2014

@adelton The public IP on the host is probably not a great choice. First of all, there may be several public IPs, and secondly they may change over time while the container runs.

I like the idea of hardcoding "nameserver 172.17.0.2" in all containers and then setting up an IP forward from 172.17.0.2:53 in the container to whatever is in the host resolv.conf on the host side (or the DNS server you manually specified). This would work for 127.0.0.1 on the host, but it would also allow cool things like changing the DNS server in running containers if the host moves to a different network with a different DNS server.

jpetazzo commented Jun 17, 2014

If Docker detects that you're using 127.0.0.1, it will replace it with 8.8.8.8 and 8.8.4.4:

https://github.com/dotcloud/docker/blob/51b188c5102e86ad453c933077bcaf9594070c28/daemon/daemon.go#L1089

Isn't that the behavior that you're seeing now?
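For illustration, here is a rough, standalone Go approximation of that behavior (this is not the linked daemon code; the /etc/resolv.conf path and the 8.8.8.8/8.8.4.4 fallback come from the comments above): read the host resolv.conf, drop any loopback nameservers, and fall back to Google DNS if nothing usable remains.

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	// Read the host's resolv.conf.
	data, err := os.ReadFile("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	nsRe := regexp.MustCompile(`^\s*nameserver\s+(\S+)`)
	var servers []string
	for _, line := range strings.Split(string(data), "\n") {
		m := nsRe.FindStringSubmatch(line)
		if m == nil {
			continue
		}
		// Drop loopback resolvers: inside a container, 127.x.x.x would
		// point at the container's own loopback, not the host's.
		if strings.HasPrefix(m[1], "127.") {
			continue
		}
		servers = append(servers, m[1])
	}

	// Fall back to Google's public DNS when nothing usable remains,
	// which is the replacement behavior described above.
	if len(servers) == 0 {
		servers = []string{"8.8.8.8", "8.8.4.4"}
	}
	for _, s := range servers {
		fmt.Println("nameserver", s)
	}
}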

rhatdan commented Jun 17, 2014

Well, this is not what we want. If the Fedora feature implements DNSSEC, we would want containers to take advantage of the DNSSEC service on the host.

jpetazzo commented Jun 17, 2014

I totally agree.

Short-term workaround: make sure that your local resolver also listens on the bridge address; then start the daemon (or individual containers) with --dns <bridgeaddr>.

A bit better: allow special values for the --dns parameter:

  • --dns bridge would automatically use the bridge address
  • --dns nat:1.2.3.4 would use the bridge address + install an iptables rule (in the host) to redirect the DNS traffic to IP address 1.2.3.4 (127.0.0.1 should work here, to be confirmed); by default it could be 127.0.0.1

Special use-case (I don't know if we want that, but that would help this, and skydock as well): allow --dns container:<name>; then that reserves an IP address, uses it as the DNS server, and whenever a container is started with that same name, the IP address is used.

I think there are also some interesting hacks with libchan that we might use here, but I don't know if that's the way we want to go. /cc @shykes

spacekpe commented Jun 18, 2014

I think that --dns bridge should be the preferred option. Please do not add hacks like "redirect all traffic on port 53 to a different IP address", etc.

Hacks like that break some setups (because you need to communicate directly with a particular server, e.g. for DNS updates) and are hard to debug. I would rather not add support for such hacks. --dns bridge and --dns container:<name> cover all the cases I'm able to think of.

dmp42 removed the Distribution label Aug 14, 2014

estesp added a commit to estesp/docker that referenced this issue Sep 24, 2014

Allow --dns "bridge" option to point DNS at bridge IP address
Addresses moby#6388

Docker-DCO-1.1-Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com> (github: estesp)

estesp commented Sep 26, 2014

There was not much interest in the --dns bridge option based on the feedback to PR #8221 a few days ago. Given that, I'm wondering whether we actually want a new option versus just "doing the right thing" when, during container network setup, the loopback address is found as the nameserver/resolver on the host. Instead of asking the user to pass special flags, it would seem to make sense to translate that to the bridge IP on the fly and provide some notification (e.g. "WARNING: Loopback DNS resolver found, adding bridge IP as container DNS resolver"). The wrinkle is determining whether the loopback resolver is actually listening on docker0 (or the specified bridge interface). This could be covered in the documentation about DNS setup, but it could still generate future issues and support headaches about DNS not working in containers for those who don't read the docs.

joemiller commented Sep 26, 2014

@estesp In our case, our local resolvers are not listening on the bridge IP and can't easily be configured to. If Docker could listen itself (or fork a helper) on the bridge IP and forward to 127.0.0.1:53, that would work.
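A minimal sketch of such a helper in Go, assuming the default docker0 address 172.17.42.1 and a host resolver on 127.0.0.1:53: a small UDP proxy that listens on the bridge IP and relays each DNS query to the host loopback resolver. This is illustrative only, not something Docker provides.

package main

import (
	"log"
	"net"
	"time"
)

// Addresses are assumptions for illustration: the docker0 bridge IP and
// the host's loopback resolver.
const (
	listenAddr   = "172.17.42.1:53"
	upstreamAddr = "127.0.0.1:53"
)

func main() {
	laddr, err := net.ResolveUDPAddr("udp", listenAddr)
	if err != nil {
		log.Fatal(err)
	}
	conn, err := net.ListenUDP("udp", laddr)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	buf := make([]byte, 65535)
	for {
		n, client, err := conn.ReadFromUDP(buf)
		if err != nil {
			log.Printf("read: %v", err)
			continue
		}
		query := make([]byte, n)
		copy(query, buf[:n])
		// Relay each query in its own goroutine so one slow upstream
		// lookup does not block other clients.
		go func(query []byte, client *net.UDPAddr) {
			up, err := net.Dial("udp", upstreamAddr)
			if err != nil {
				log.Printf("dial upstream: %v", err)
				return
			}
			defer up.Close()
			if _, err := up.Write(query); err != nil {
				log.Printf("write upstream: %v", err)
				return
			}
			up.SetReadDeadline(time.Now().Add(5 * time.Second))
			reply := make([]byte, 65535)
			rn, err := up.Read(reply)
			if err != nil {
				log.Printf("read upstream: %v", err)
				return
			}
			if _, err := conn.WriteToUDP(reply[:rn], client); err != nil {
				log.Printf("write client: %v", err)
			}
		}(query, client)
	}
}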

rhatdan commented Sep 27, 2014

There are ways to set up iptables to forward the packets, I have been told.

joemiller commented Sep 27, 2014

@rhatdan I've thought about the iptables approach as well, but my iptables-fu is weak and I couldn't find something that worked. I'm hesitant to go this route as it is sort of magical and hard to debug, IMO, for non-iptables experts in the org (including myself). If it were managed by Docker itself, though, I would be more comfortable with it.

The approach I am considering at the moment is to run a simple dnsmasq instance, with caching disabled, that simply proxies DNS to 127.0.0.1:53. Since our platform is systemd-based, we can easily ensure the dnsmasq process starts after Docker, so that Docker creates the docker0 bridge first.

The dnsmasq.conf would look like this:

interface=docker0
except-interface=lo
port=53
server=127.0.0.1#53
no-hosts
bind-interfaces
dns-forward-max=1024
cache-size=0
no-negcache

This approach also assumes that the bridge will be 172.17.42.1 and requires configuring Docker to start with --dns 172.17.42.1, so it is not optimal.

If startup ordering is an issue, one could also manually manage the docker0 bridge and enforce a specific IP, then use listen-address=172.17.42.1 in the config instead of interface=docker0.

Wondering if anyone else has tried this approach or has a working iptables config that accomplishes a similar "proxying" of 172.17.42.1:53 -> 127.0.0.1:53 on the host.

adelton commented Oct 9, 2014

Considering that, I'm wondering whether we actually want a new option versus just "doing the right thing" when, during container network setup, the loopback address is found as the nameserver/resolver on the host.

But the user can use --dns 127.0.0.1 as a way to point the resolv.conf in the container to localhost in the container (because it is running bind, for example). There's really no way to do the right thing without additional options, because the 127.0.0.1 value is ambiguous: it can mean either 127.0.0.1 on the host or 127.0.0.1 in the container.

swagiaal added commits to swagiaal/docker that referenced this issue Nov 19 and Nov 26, 2014

Use bridge address to reach local nameserver on the host.
Addresses moby#6388. If the host resolv.conf contains only a local
nameserver, allow the container to take advantage of that instead
of using the default nameservers.

Signed-off-by: Sami Wagiaalla <swagiaal@redhat.com>

jessfraz commented Feb 26, 2015

ping @estesp I think this should be fixed now, right?

estesp commented Feb 26, 2015

Yes, replacing localhost DNS has been fixed for quite a while; it is replaced with Google DNS as noted in prior comments.

For those who want to take advantage of a localhost resolver on the host, there are various ideas in the comments above, but at this point there has been no strong interest in making Docker handle the iptables/pass-through work to get DNS requests from the container to localhost on the Docker daemon/host. Clearly "127.0.0.1" as a resolver in the container's /etc/resolv.conf is not viable, as that just means the container network namespace's own localhost, which is why it gets replaced if it exists.

jessfraz closed this Feb 26, 2015

pjps commented Jul 13, 2015

iptables(8) DNAT comes in handy for enabling containers to communicate with the local resolver on the host.

  1. Enable local 'lo' routing via the 'docker0' bridge interface (it is off by default):
    # sysctl -w net.ipv4.conf.docker0.route_localnet=1
  2. Enable the local resolver to accept requests from the 172.17.0.0/16 docker sub-network:
    unbound(8): # vi /etc/unbound/unbound.conf -> access-control: 172.17.0.0/16 allow
    ndjbdns(8): # touch /etc/ndjbdns/ip/172.17
  3. Use the iptables(8) destination NAT (DNAT) feature to divert DNS traffic from 'docker0' to the 'lo' interface:
    # iptables -t nat -I PREROUTING -p UDP -s 172.17.0.0/16 --dport 53 -i docker0 -j DNAT --to-destination 127.0.0.1:53

It would greatly help if the Docker daemon could conditionally add/remove the above configuration when the host lists localhost (127.0.0.1) as its nameserver. Would it be possible to make such a change to the daemon?

Thank you.
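For illustration, a rough Go sketch of what such a conditional step might look like, assuming the exact commands from the list above and a naive check of the host resolv.conf; this is not actual daemon code, and the local resolver would still need to accept queries from 172.17.0.0/16 (step 2 above).

package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

// hostUsesLoopbackResolver naively checks whether the host resolv.conf
// lists a 127.x.x.x nameserver.
func hostUsesLoopbackResolver() bool {
	data, err := os.ReadFile("/etc/resolv.conf")
	if err != nil {
		return false
	}
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && fields[0] == "nameserver" && strings.HasPrefix(fields[1], "127.") {
			return true
		}
	}
	return false
}

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	if !hostUsesLoopbackResolver() {
		return // nothing to do
	}
	// Commands taken from the steps above: allow routing to loopback via
	// docker0, then DNAT container DNS traffic to the host resolver.
	run("sysctl", "-w", "net.ipv4.conf.docker0.route_localnet=1")
	run("iptables", "-t", "nat", "-I", "PREROUTING",
		"-p", "UDP", "-s", "172.17.0.0/16", "--dport", "53",
		"-i", "docker0", "-j", "DNAT", "--to-destination", "127.0.0.1:53")
}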

rhatdan commented Jul 14, 2015

It might be better to open a new issue rather than continuing here, but I will leave that to the Docker maintainers to decide.

@jfrazelle @crosbymichael @jpetazzo @tianon

pjps commented Jul 14, 2015

Yes, I wrote to @jfrazelle a little while back about the same thing.

pjps commented Jul 14, 2015

New issue -> #14627

CharlieR-o-o-t commented Jan 8, 2019

There is another bug that appeared after this PR.

I use a caching DNS resolver on my host machine, and there are custom dnsSearch params for each container.

My container resolv.conf looks like this:

search foo.com
nameserver 127.0.0.11
options timeout:2 ndots:0

Host machine resolv.conf:

nameserver 127.0.0.1
nameserver <my_dns_server_ip>
nameserver <my_dns_server_ip2>

components/engine/vendor/github.com/docker/libnetwork/sandbox_dns_unix.go:218

	if len(sb.config.dnsList) > 0 || len(sb.config.dnsSearchList) > 0 || len(sb.config.dnsOptionsList) > 0 {
...
		// After building the resolv.conf from the user config save the
		// external resolvers in the sandbox. Note that --dns 127.0.0.x
		// config refers to the loopback in the container namespace
		sb.setExternalResolvers(newRC.Content, types.IPv4, false)
	} else {
		// If the host resolv.conf file has 127.0.0.x container should
		// use the host resolver for queries. This is supported by the
		// docker embedded DNS server. Hence save the external resolvers
		// before filtering it out.
		sb.setExternalResolvers(currRC.Content, types.IPv4, true)

With dnsOpt/dnsSearch set, the container always tries to reach the DNS server (127.0.0.1) inside the container's network namespace. A workaround is needed in this part.

Solution
If the Docker embedded DNS resolver is in use (127.0.0.11), loopback DNS usage should always be handled in the host machine's namespace.

I can prepare a PR if you agree with that.
@thaJeztah, could you take a look and reopen the issue?
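A minimal, self-contained sketch of the kind of check such a change could rely on (the helper name and its placement are assumptions, not actual libnetwork code): if every nameserver in the generated container resolv.conf is a loopback address, which includes the embedded resolver 127.0.0.11, queries should be handed to the host-side resolver even when custom dns/dnsSearch/dnsOptions were supplied.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// loopbackOnly reports whether every nameserver in the given resolv.conf
// content is a loopback address (127.0.0.0/8), which covers Docker's
// embedded resolver 127.0.0.11. Hypothetical helper for illustration.
func loopbackOnly(resolvConf string) bool {
	nsRe := regexp.MustCompile(`(?m)^\s*nameserver\s+(\S+)`)
	matches := nsRe.FindAllStringSubmatch(resolvConf, -1)
	if len(matches) == 0 {
		return false
	}
	for _, m := range matches {
		if !strings.HasPrefix(m[1], "127.") {
			return false
		}
	}
	return true
}

func main() {
	// The container resolv.conf from the comment above.
	containerRC := "search foo.com\nnameserver 127.0.0.11\noptions timeout:2 ndots:0\n"
	// true here means: hand loopback DNS traffic to the host-side resolver.
	fmt.Println(loopbackOnly(containerRC))
}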
