Enable Docker to connect with the local resolver on the host 127.0.0.1:53 #14627

Closed
pjps opened this Issue Jul 14, 2015 · 24 comments

pjps commented Jul 14, 2015

When localhost (127.0.0.1) is the only entry in the host's '/etc/resolv.conf' file, Docker defaults to using Google's public DNS servers inside containers. That is not the best solution for various reasons; for example, it cannot resolve internal domain names.

With Fedora introducing a default local (127.0.0.1:53) DNSSEC resolver on the host, it would be the best option for container applications to take full advantage of it. Currently, there is no way for Docker containers to talk to the local resolver on the host.

To enable Docker containers to communicate with the local (127.0.0.1:53) resolver on the host, one would need to make the following configuration changes on the host:

  • Enable routing of local (lo) destinations via the docker0 bridge interface (it is off by default).
    # sysctl -w net.ipv4.conf.docker0.route_localnet=1
  • Allow the local resolver to accept requests from the 172.17.0.0/16 Docker subnetwork.
    unbound(8): add "access-control: 172.17.0.0/16 allow" to /etc/unbound/unbound.conf
    ndjbdns(8): # touch /etc/ndjbdns/ip/172.17
  • Use iptables(8) destination NAT (DNAT) to divert DNS traffic from docker0 to the lo interface.
    # iptables -t nat -I PREROUTING -p udp -s 172.17.0.0/16 --dport 53 -i docker0 -j DNAT --to-destination 127.0.0.1:53
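Taken together, the routing and DNAT steps above can be sketched as a small helper (the resolver access-control change is a config-file edit and is omitted here). This is an illustration only; `docker0` and `172.17.0.0/16` are the Docker defaults and may differ on a given host.

```python
import subprocess

DOCKER_IF = "docker0"          # default bridge interface; may differ
DOCKER_NET = "172.17.0.0/16"   # default Docker subnetwork; may differ

def host_dns_commands(iface=DOCKER_IF, subnet=DOCKER_NET):
    """Build the host-side commands that expose the local resolver
    (127.0.0.1:53) to containers on the given bridge/subnet."""
    return [
        # 1. allow routing of 127.0.0.0/8 destinations arriving on the bridge
        ["sysctl", "-w", f"net.ipv4.conf.{iface}.route_localnet=1"],
        # 2. DNAT container DNS traffic to the loopback resolver
        ["iptables", "-t", "nat", "-I", "PREROUTING",
         "-p", "udp", "-s", subnet, "--dport", "53",
         "-i", iface, "-j", "DNAT", "--to-destination", "127.0.0.1:53"],
    ]

def apply_host_dns(commands):
    """Run each command on the host; requires root."""
    for cmd in commands:
        subprocess.run(cmd, check=True)
```

Running `apply_host_dns(host_dns_commands())` as root would apply both changes; the resolver itself still needs to be told to accept queries from the Docker subnetwork, as in the unbound/ndjbdns steps above.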

The Docker daemon is best placed to make the above changes on the host. It would greatly help if the Docker daemon could conditionally apply the above configuration when the host lists only 127.0.0.1 as its name server.

Could the Docker daemon be updated to make the above changes, please?

Thank you.

Contributor

estesp commented Jul 14, 2015

Thanks for this additional information about localhost DNS access and the specific ways it can be enabled. Something important that has happened since the last time this was discussed is the incorporation of libnetwork as the network implementation for Docker, where docker0 bridge networking is now just a default, but definitely not the only option, for container networking.

The request for any changes to how bridge networking operates will end up, if proposed and accepted, as changes to the docker/libnetwork GitHub repo, so I'm going to ping the maintainers there; maybe they can give you guidance on whether this is something they want to consider for libnetwork's bridge mode or not. @mrjana @mavenugo can you take a look?

thozza commented Jul 14, 2015

Hi.

I would like to note that enabling the use of 127.0.0.1 from the host's resolv.conf really should be considered. There are networks where outbound DNS queries are blocked for all hosts except whitelisted ones. The DNS resolver running on localhost can be configured to talk to such whitelisted resolvers, so ignoring it and using public DNS resolvers on the Internet may effectively break DNS resolution inside the container.

Hope you'll consider the change and that we can find some implementation that will work and can be merged.

Contributor

estesp commented Jul 14, 2015

@thozza definitely a reasonable point, but I think one thing that came up in prior discussions is that we should not make this a binary either-or choice between 127.0.0.1 and public DNS servers on the Internet. There are plenty of solutions between those two extremes, including having the "localhost" resolver listen on docker0's IP address, or having a real internal DNS server in the host's /etc/resolv.conf. The only reason Docker removes the localhost address is that there is no guarantee the steps the initial issue reporter noted for making the host's lo reachable have been taken, and for Docker to do so carte blanche means potentially stepping on the container's lo interface, which may actually be useful to the container (remember that a container has a "127.0.0.1" address on lo, just as the host does).

That being said, discussing the options here is a reasonable step and, as I mentioned above, will now have to include the libnetwork maintainers as the Docker engine no longer has any responsibility for maintaining /etc/resolv.conf in the container.
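The nameserver filtering discussed above can be illustrated with a rough sketch. This is not Docker's actual code, and the fallback server list is an assumption based on this thread; it only shows the decision being debated: loopback nameservers inherited from the host are dropped, and public defaults are substituted when nothing usable remains.

```python
import ipaddress

# Fallback servers reportedly used when nothing usable remains (assumption).
PUBLIC_FALLBACK = ["8.8.8.8", "8.8.4.4"]

def container_nameservers(host_nameservers):
    """Drop nameservers a container cannot reach through the bridge
    (loopback addresses); fall back to public DNS if none survive."""
    usable = [ns for ns in host_nameservers
              if not ipaddress.ip_address(ns).is_loopback]
    return usable or list(PUBLIC_FALLBACK)
```

With a host resolv.conf listing only 127.0.0.1, this yields the public fallback, which is exactly the behavior the issue reporter wants to avoid.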

pjps commented Jul 20, 2015

Hello Phil,

The only reason Docker removes the localhost address is because
there is no guarantee that the steps the initial issue reporter noted
about making the hosts's lo reachable have been taken,

True; the idea is that if Docker, or libnetwork now, could add the required rules to make 'lo' reachable, it would be much more seamless.

@mrjana @mavenugo: did you have chance to look at this issue? (just checking)

Contributor

mrjana commented Jul 20, 2015

@pjps @estesp @thozza While I agree that we have to make this use case work within the container, because (a) it is a valid use case and (b) it helps container security overall, I just don't think we should solve this problem at the L3 level by making the host's lo reachable from within the container. IMO it will break a number of applications that may be relying on the container's internal lo connectivity (I already know of a few use cases that do).

So as an alternative, I propose that this be solved at the DNS layer. To be more specific: when we inherit a resolv.conf configuration from the host that contains nameserver entries in the loopback address range, we can create a DNS proxy socket inside every container that wants this and run a DNS proxy that forwards these DNS requests to the real local resolver running on the host. With this design the DNS proxy itself doesn't need to be per-container: there can be one DNS proxy that listens on many sockets, one socket per container namespace. When a container is spawned, either its namespace path can be provided to the DNS proxy so that it can open a socket inside that namespace, or we can open the socket in libnetwork and transfer it using SCM_RIGHTS ancillary data over a UNIX domain socket.

This provides a much cleaner solution without breaking the inside-container loopback connectivity.

This also augurs well with our future libnetwork plan to provide a framework for an external DNS plugin API, so that any external entity can act as this DNS proxy in the form of an external DNS plugin. In fact, the local DNS resolver itself can implement the API to become that DNS plugin.

Let me know what you guys think.
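The SCM_RIGHTS hand-off described above can be sketched as follows. This is an illustration of the mechanism, not libnetwork code: Python's socket.send_fds/recv_fds wrappers (Python 3.9+) carry the descriptor, and a socketpair stands in for the two processes.

```python
import socket

def hand_off_socket():
    """Open a UDP socket (as libnetwork might, inside a container's
    namespace) and pass its file descriptor to a 'proxy' side over a
    UNIX domain socket using SCM_RIGHTS ancillary data."""
    # The socket to be handed to the DNS proxy.
    dns_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    dns_sock.bind(("127.0.0.1", 0))

    # In reality these would be two processes; a socketpair stands in.
    sender, receiver = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)

    # send_fds/recv_fds wrap sendmsg/recvmsg with SCM_RIGHTS.
    socket.send_fds(sender, [b"dns"], [dns_sock.fileno()])
    msg, fds, _flags, _addr = socket.recv_fds(receiver, 16, 1)

    # The proxy side now owns a duplicate descriptor for the same socket.
    proxied = socket.socket(fileno=fds[0])
    return dns_sock.getsockname(), proxied.getsockname(), msg
```

The two getsockname() results are equal because both descriptors refer to the same underlying socket, which is exactly why the proxy can serve DNS on a socket that was opened inside the container's namespace.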

Contributor

rhatdan commented Jul 20, 2015

I like this idea, but it seems like it would be a lot of work and require multiple moving parts to get right.

Contributor

mrjana commented Jul 20, 2015

@rhatdan It is definitely a bit more work than just setting up some iptables rules and flipping some kernel networking knobs. But it doesn't have to be a lot more work: we can start with something simple that just gets the job done initially (before adding any bells and whistles in terms of APIs and options), and all of it can be contained within the libnetwork code base.

Contributor

rhatdan commented Jul 20, 2015

@mrjana Sounds fine. We are facing a potential deadline for Fedora 23: if the Secure DNS proposal gets accepted, Fedora will default to 127.0.0.1 in its resolv.conf, in an end-of-year time frame, I believe.


pjps commented Jul 21, 2015

Hello @mrjana

IMO it will break a bunch of applications which might be relying
on the internal lo connectivity(I already know a few use cases who use that).

Could you please elaborate on this breakage? How does it happen? (just curious)

This provides a much cleaner solution without breaking the inside-container loopback connectivity.
Also this augurs well with our future libnetwork plan to provide a framework for external DNS plugin
api so that any external entity can act as this DNS proxy in the form of an external DNS plugin.
In fact the Local DNS resolver itself can incorporate the api to become that DNS plugin.
Let me know what you guys think?

That is fine. As long as there is a way for container applications to use the local (127.0.0.1) resolver on the host, it is good. But as @rhatdan said, it seems like a lot of work, whereas iptables(8) DNAT is trivial.

But it doesn't have to be a lot more work and we can start with something simple which just gets
the job done initially (before adding any bells and whistles interms of apis and options) and all of
them can be contained within the libnetwork code base.

Yes, this sounds better. Let's start with something minimal that users can use right away and then improve upon it going forward. The F23 feature has been accepted by Fedora and is due to be released later this year, with an alpha release by mid-August 2015.

It'll really help if a minimal libnetwork solution could be made available before we hit the alpha release. We plan to host Fedora test days for the DNSSEC feature in the coming days; it would be great if we could test the Docker/libnetwork solution too.

Thank you.

pjps commented Jul 27, 2015

Hello @mrjana, @mavenugo,

Did you have chance to look into the minimal DNS proxy solution? (just checking)


pjps commented Jul 31, 2015

@mrjana @mavenugo ...ping!?!

Contributor

mavenugo commented Jul 31, 2015

@pjps

Did you have chance to look into the minimal DNS proxy solution? (just checking)

Not yet. As @mrjana explained, this seems to align with libnetwork's planned DNS plugin work. We haven't started on it.

pjps commented Aug 7, 2015

@mavenugo Thank you for the update. As said earlier, it seems the proposed DNS proxy and plugin work would take much longer. Until then, it would help to have a minimal workaround, maybe iptables(8) DNAT or a minimal proxy daemon, so that users can take advantage of the DNSSEC resolver on the host.

Would it be possible to make such a solution available for the short term?
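As an illustration of the "minimal proxy daemon" idea, here is a rough sketch (not a proposed implementation; addresses are placeholders): a single-threaded forwarder that relays UDP DNS datagrams from a bridge-facing address to the host's loopback resolver.

```python
import socket
import threading

def _serve(srv, resolver_addr):
    """Forward each UDP datagram received on srv to resolver_addr and
    relay the reply back to the original client."""
    while True:
        query, client = srv.recvfrom(4096)
        upstream = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        upstream.settimeout(5)
        try:
            upstream.sendto(query, resolver_addr)
            reply, _ = upstream.recvfrom(4096)
            srv.sendto(reply, client)
        except socket.timeout:
            pass  # drop the query; the DNS client will retry
        finally:
            upstream.close()

def dns_forward(listen_addr, resolver_addr=("127.0.0.1", 53)):
    """Start the forwarder on listen_addr (e.g. the docker0 address,
    172.17.0.1:53) and return the address actually bound."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(listen_addr)
    threading.Thread(target=_serve, args=(srv, resolver_addr),
                     daemon=True).start()
    return srv.getsockname()
```

Containers would then use the bridge address as their nameserver, avoiding both the route_localnet sysctl and the DNAT rule, at the cost of running one extra daemon on the host.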

davidbirdsong commented Oct 6, 2015

I plan to run --net=host, but this still leaves me dead in the water when it comes to builds. I'm not able to build an image without erecting a temporary DNS server listening on something other than 127.0.0.1.

pjps commented Nov 18, 2015

Hello @mrjana @mavenugo

It's been quite a while; I just wanted to check whether there has been any development on the proposed DNS proxy solution for Docker containers.

Thank you.

Contributor

mavenugo commented Nov 18, 2015

@pjps please refer to https://groups.google.com/forum/#!topic/docker-dev/WXkMiPJqh7I. This is exactly what is being discussed now. @phemmer, @sanimej, and others are discussing the best approach, and I believe there will be a proposal with more details shortly. Stay tuned.

pjps commented Nov 18, 2015

@mavenugo: Ah, Great! Thank you so much for sharing the link, I appreciate it.

Contributor

LK4D4 commented Sep 14, 2016

I wonder if we've resolved this. Going to close it to make the issues list cleaner. Feel free to open a new issue; it will get more attention.

@LK4D4 LK4D4 closed this Sep 14, 2016

candrews commented Sep 14, 2016

@LK4D4 I'm confused - so did you just close the issue, knowing that it's not fixed, simply to "make the issue list cleaner"?

pjps commented Sep 14, 2016

I wonder if we've resolved this. Going to close it to make issues list cleaner.

@LK4D4 Please don't close it if it's not fixed. I'll try and check if it's been fixed and will close it once done.

Contributor

LK4D4 commented Sep 14, 2016

@pjps @candrews I'm not an evil closer, guys. I'm genuinely trying to make issues lead somewhere. There are 2600 open issues; the chance that someone relevant will find this one is close to zero. A new issue will get much more attention.
I personally don't know if it's fixed or not. At least we have some attention now.

pjps commented Sep 14, 2016

I personally don't know if it's fixed or not. At least we have some attention now.

@LK4D4: I'll try and confirm it through the weekend. At least let it be open till then.

Contributor

LK4D4 commented Sep 14, 2016

@pjps No problem, but I suggest opening a new one later anyway :)

@LK4D4 LK4D4 reopened this Sep 14, 2016

petrosagg added a commit to resin-os/meta-resin that referenced this issue Oct 9, 2016

docker: enable using local resolver in containers
iptables and sysctl adjusted according to moby/moby#14627

Signed-off-by: Petros Angelatos <petrosagg@gmail.com>

petrosagg added a commit to resin-os/meta-resin that referenced this issue Nov 2, 2016

docker: enable using local resolver in containers
iptables and sysctl adjusted according to moby/moby#14627

Signed-off-by: Petros Angelatos <petrosagg@gmail.com>

petrosagg added a commit to resin-os/meta-resin that referenced this issue Nov 14, 2016

docker: enable using local resolver in containers
iptables and sysctl adjusted according to moby/moby#14627

Signed-off-by: Petros Angelatos <petrosagg@gmail.com>

agherzan added a commit to resin-os/meta-resin that referenced this issue Nov 14, 2016

docker: enable using local resolver in containers
iptables and sysctl adjusted according to moby/moby#14627

Signed-off-by: Petros Angelatos <petrosagg@gmail.com>

@thaJeztah thaJeztah closed this in #29891 Jan 11, 2017

@thaJeztah thaJeztah added this to the 1.14.0 milestone Jan 11, 2017
