
Proposal: Need options to disable embedded DNS #1085

Open
hustcat opened this issue Apr 7, 2016 · 19 comments

@hustcat

hustcat commented Apr 7, 2016

I want to use the default bridge and a macvlan network in k8s. However, when a container is connected to the macvlan network, the DNS configuration from the default bridge network gets replaced by the macvlan network's:

Start container with default bridge network:

[root@kube-node1 ~]# docker ps
CONTAINER ID        IMAGE                                COMMAND               CREATED             STATUS              PORTS               NAMES
e9bed187828f        sshd:1.0                             "/usr/sbin/sshd -D"   11 minutes ago      Up 11 minutes                           k8s_sshd-1.aea60a3a_sshd-1_default_33a61753-fc72-11e5-9520-525460110101_85c3614c
127973c47c43        gcr.io/google_containers/pause:2.0   "/pause"              11 minutes ago      Up 11 minutes                           k8s_POD.6059dfa2_sshd-1_default_33a61753-fc72-11e5-9520-525460110101_94dbf418

[root@kube-node1 ~]# docker exec e9bed187828f cat /etc/resolv.conf 
search default.svc.cluster.local svc.cluster.local cluster.local default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.254.0.10
options ndots:5
options ndots:5
[root@kube-node1 ~]# docker exec e9bed187828f nslookup kubernetes.default
Server:         10.254.0.10
Address:        10.254.0.10#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.254.0.1

Connect container to macvlan network:

[root@kube-node1 ~]# docker network create -d macvlan --subnet=10.10.10.0/24 --gateway=10.10.10.1 -o parent=eth0 pub_net
056f952e74668afcce1f9f2d9543e847f562da0d044862775b0e660c85b9f744

[root@kube-node1 ~]# docker network connect --ip="10.10.10.100" pub_net 127973c47c43

/etc/resolv.conf will be changed:

[root@kube-node1 ~]# docker exec e9bed187828f cat /etc/resolv.conf 
search default.svc.cluster.local svc.cluster.local cluster.local default.svc.cluster.local svc.cluster.local cluster.local
nameserver 127.0.0.11
options ndots:5 ndots:0
[root@kube-node1 ~]# docker exec e9bed187828f nslookup kubernetes.default                                               
;; connection timed out; trying next origin
Server:         127.0.0.11
Address:        127.0.0.11#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.254.0.1

This makes every DNS query go to 127.0.0.11 first, time out, and only then reach 10.254.0.10, so every lookup is slow.

;; connection timed out; trying next origin

@mrjana @mavenugo @thockin @brendandburns

Refer to #19474

@mavenugo
Contributor

mavenugo commented Apr 7, 2016

@hustcat can you share the docker version details ?
I couldn't quite understand the reason for connection timeout. cc @sanimej

@hustcat
Author

hustcat commented Apr 7, 2016

1.11.0-dev with experimental features enabled:

#docker --version
Docker version 1.11.0-dev, build 901c67a-unsupported, experimental

@sanimej

sanimej commented Apr 7, 2016

@mavenugo timeout is because the name can't be resolved in the docker domain.

@hustcat Normally what we recommend is to pass external DNS servers through the --dns option in the docker run. Embedded DNS server will forward the queries that it can't resolve to the configured servers.
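A sketch of that recommendation: the external resolver can be passed per container with docker run --dns 10.254.0.10, or configured daemon-wide in /etc/docker/daemon.json (the 10.254.0.10 address here is simply the cluster DNS from the output earlier in this thread):

```json
{
  "dns": ["10.254.0.10"]
}
```

With either form, the embedded server at 127.0.0.11 forwards names it cannot resolve in the docker domain to that upstream.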

@hustcat
Author

hustcat commented Apr 7, 2016

@sanimej yes, 10.254.0.10 is the external DNS server, and the embedded DNS server does forward the query. But this makes DNS queries inefficient and slow, so disabling it would be better for my case.
Also, I don't want a port that has nothing to do with the application being listened on inside the container; it confuses application developers.

[root@kube-node1 ~]# docker exec e9bed187828f netstat -lnp 
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name   
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      -                   
tcp        0      0 127.0.0.11:45723            0.0.0.0:*                   LISTEN      -                   
tcp        0      0 :::22                       :::*                        LISTEN      -                   
udp        0      0 127.0.0.11:42323            0.0.0.0:*                               - 

and iptables rules:

[root@kube-node1 ~]# docker exec --privileged e9bed187828f iptables -t nat -nvL         
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 3 packets, 210 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   24  1732 DNAT       udp  --  *      *       0.0.0.0/0            127.0.0.11          udp dpt:53 to:127.0.0.11:42323 
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            127.0.0.11          tcp dpt:53 to:127.0.0.11:45723 

Chain POSTROUTING (policy ACCEPT 27 packets, 1942 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 SNAT       udp  --  *      *       127.0.0.11           0.0.0.0/0           udp spt:42323 to::53 
    0     0 SNAT       tcp  --  *      *       127.0.0.11           0.0.0.0/0           tcp spt:45723 to::53 

I want the container to stay clean.

@vavrusa

vavrusa commented Jun 16, 2016

I'm in favour of this. I'm running DNS authoritatives and resolvers inside containers, and I'm sure many others do. This prevents me from doing that without resorting to port redirections. Port 53 shouldn't be treated any differently from 22, 80, or 443, or at least there should be an option to disable this.

@brat002

brat002 commented Jan 5, 2017

The built-in DNS server falls over under load. Either make it possible to disable it, or make it reliable.

@berlic

berlic commented Feb 7, 2017

The embedded DNS server loses some upstream responses, causing DNS timeouts for client apps inside the container.
I can't find a 100% reproducible sequence, but this happens quite often on one of our hosts (Server Version: 1.13.0, Kernel Version: 4.4.0-47-generic, Operating System: Ubuntu 16.04.1 LTS).

This happens only for containers in a custom network, where Docker uses the 127.0.0.11 resolver.

I run while true; do date; ping -w 1 s3.eu-central-1.amazonaws.com; echo sleep; sleep 1; done in containers and can see periods of ping: unknown host. This happens to all containers simultaneously; after a minute or so DNS responses start arriving again.

During these strange periods I can see the outgoing UDP packets with DNS requests with tcpdump inside the container, and responses from the upstream with tcpdump on the upstream, but no UDP packets with DNS responses inside the container!

If I replace the 127.0.0.11 resolver with the upstream IP address, everything works fine.

@lierdakil

FWIW, I'm running into issues with embedded DNS on a host that uses nftables instead of iptables (iptables is disabled because it conflicts with nftables' dnat). It just doesn't work, plain and simple, for obvious reasons. While being able to disable eDNS won't solve the service discovery problem, I could work around that.

@Zenithar

Any news? I have the same problem with nftables too.

@Gunni

Gunni commented Jan 5, 2018

I want to use my own DNS servers, no inside-docker name doohickey wanted.

Just old simple standardized DNS that isn't being messed with.

For now my containers have to run a startup script of
echo nameserver <my actual nameserver> > /etc/resolv.conf
because of this.
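That startup-script workaround can be sketched as a small POSIX-sh helper. The function name write_static_resolv is hypothetical (mine, not from Docker or this thread); in a real container entrypoint you would point it at /etc/resolv.conf:

```shell
#!/bin/sh
# Overwrite a resolv.conf-style file with a single static upstream,
# replacing the 127.0.0.11 entry Docker generates.
write_static_resolv() {
    path="$1"    # resolv.conf location (e.g. /etc/resolv.conf)
    server="$2"  # upstream nameserver IP
    printf 'nameserver %s\n' "$server" > "$path"
}

# Example against a scratch file instead of the real /etc/resolv.conf:
write_static_resolv ./resolv.conf.test 10.254.0.10
cat ./resolv.conf.test   # prints: nameserver 10.254.0.10
```

Note this has to run on every container start, since Docker regenerates /etc/resolv.conf when networks are (re)connected.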

@brat002

brat002 commented Jan 5, 2018

We use our own patch to do that:

diff --git a/components/engine/container/container.go b/components/engine/container/container.go
index 11814b7..1206e7b 100644
--- a/components/engine/container/container.go
+++ b/components/engine/container/container.go
@@ -793,9 +793,12 @@ func (container *Container) BuildCreateEndpointOptions(n libnetwork.Network, epC
                createOptions = append(createOptions, libnetwork.CreateOptionService(svcCfg.Name, svcCfg.ID, net.ParseIP(vip), portConfigs, svcCfg.Aliases[n.ID()]))
        }

-       if !containertypes.NetworkMode(n.Name()).IsUserDefined() {
-               createOptions = append(createOptions, libnetwork.CreateOptionDisableResolution())
-       }
+       // if !containertypes.NetworkMode(n.Name()).IsUserDefined() {
+       //      createOptions = append(createOptions, libnetwork.CreateOptionDisableResolution())
+       // }
+
+       // Always disable the embedded DNS server.
+       createOptions = append(createOptions, libnetwork.CreateOptionDisableResolution())

@smorgrav

smorgrav commented Apr 16, 2018

I second this. My beef is that I want full control of the NAT table for the container. The assumptions I have to make to let Docker set the NAT rules it needs seem unnecessary.

@jktr

jktr commented Jun 24, 2018

My use case for this is using the macvlan network driver together with nftables.

Even though I've set "iptables": false, eDNS still tries (unsuccessfully) to set up NAT rules and then proceeds to mangle /etc/resolv.conf, breaking all container DNS resolution in the process and forcing me to run a patched Docker with eDNS disabled.

While a switch to disable eDNS would be best, I'd at least like to see eDNS disabled if NATting is impossible. Disabling eDNS when "iptables":false is set would be a good start (until nftables is fully supported).
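For reference, the switch jktr mentions is an existing daemon option; a minimal /etc/docker/daemon.json sketch (note this only stops Docker from managing iptables rules, it does not by itself disable eDNS):

```json
{
  "iptables": false
}
```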

@totoCZ

totoCZ commented Sep 8, 2018

I had the same issue with nftables. Traefik would fail with

dial tcp: lookup acme-v02.api.letsencrypt.org on 127.0.0.11:53: read udp 127.0.0.1:45118->127.0.0.11:53: read: connection refused

Obviously there's no such thing as 127.0.0.11 on my system; I use static IPs with static src/dst NAT rules.
The solution, which also works with static IPs, is very simple and needs no recompiling or Dockerfile changes.
Put this in docker-compose:

  volumes:
    - /opt/data/resolv.conf:/etc/resolv.conf:ro

where /opt/data/resolv.conf has the correct DNS servers (8.8.8.8) :-)
Now everything runs as it should.

@growler

growler commented Sep 20, 2018

Same problem here (openvswitch network driver, nftables with iptables disabled in the modprobe config, own resolver/discovery service). The patch above helps, but I still don't understand why this isn't optional, the same way DisableGatewayService works.

@mpalmer

mpalmer commented Apr 10, 2019

Add me to the list of people who really need to be able to disable the embedded DNS server. In my case, it's because it doesn't handle PTR queries correctly.

@cakyus

cakyus commented Jun 24, 2019

In my case, ssh to a remote docker container is very slow. We still need sshd inside the container because a legacy application depends heavily on the ssh command. If you only need /etc/hosts, you can disable DNS-based name resolution via /etc/nsswitch.conf by changing hosts: files dns to hosts: files; /etc/resolv.conf is then ignored:

sed -i 's/^hosts:.*/hosts: files/' /etc/nsswitch.conf

@black-ish

Found this, and for me this is still an issue.

@juanluisvaladas

This is also affecting me...
