Can't access internet from containers #13381

Closed
cfpeng opened this Issue May 21, 2015 · 75 comments

cfpeng commented May 21, 2015

When I ping google.com in the container, it returns: ping: unknown host

[HOST Info]
root@host# uname -a
Linux localhost 4.0.2-x86_64-linode56 #1 SMP Mon May 11 16:55:19 EDT 2015 x86_64 GNU/Linux

root@host# docker version
Client version: 1.6.2
Client API version: 1.18
Go version (client): go1.4.2
Git commit (client): 7c8fca2
OS/Arch (client): linux/amd64
Server version: 1.6.2
Server API version: 1.18
Go version (server): go1.4.2
Git commit (server): 7c8fca2
OS/Arch (server): linux/amd64

Start the container:
root@host# docker run --rm -it debian /bin/bash

Start capturing packets:
root@host# tshark -i eth0 -i docker0
1 0.000000 106.186.. -> 8.8.8.8 DNS 70 Standard query 0xb49a A google.com
2 0.046688 8.8.8.8 -> 106.186.. DNS 86 Standard query response 0xb49a A 216.58.221.14
3 -0.000042 172.17.0.4 -> 8.8.8.8 DNS 70 Standard query 0xb49a A google.com
4 4.171017 fe80::1 -> ff02::1 ICMPv6 118 Router Advertisement from 00:05:73:a0:0f:ff
5 5.005167 106.186.. -> 8.8.8.8 DNS 70 Standard query 0xb49a A google.com
6 5.007502 8.8.8.8 -> 106.186.. DNS 86 Standard query response 0xb49a A 216.58.221.14
7 5.005127 172.17.0.4 -> 8.8.8.8 DNS 70 Standard query 0xb49a A google.com
8 5.016512 02:42:ac:11:00:04 -> ca:5b:7d:34:78:20 ARP 42 Who has 172.17.42.1? Tell 172.17.0.4
9 5.016542 ca:5b:7d:34:78:20 -> 02:42:ac:11:00:04 ARP 42 172.17.42.1 is at ca:5b:7d:34:78:20
10 10.010414 106.186.. -> 8.8.8.8 DNS 70 Standard query 0x1367 A google.com
11 10.046683 8.8.8.8 -> 106.186.. DNS 86 Standard query response 0x1367 A 216.58.221.14
12 10.010374 172.17.0.4 -> 8.8.8.8 DNS 70 Standard query 0x1367 A google.com
13 15.015578 106.186.. -> 8.8.8.8 DNS 70 Standard query 0x1367 A google.com
14 15.052782 8.8.8.8 -> 106.186.. DNS 246 Standard query response 0x1367 A 173.194.126.198 A 173.194.126.196 A 173.194.126.197 A 173.194.126.194 A 173.194.126.195 A 173.194.126.193 A 173.194.126.206 A 173.194.126.199 A 173.194.126.200 A 173.194.126.192 A 173.194.126.201
15 15.015538 172.17.0.4 -> 8.8.8.8 DNS 70 Standard query 0x1367 A google.com

root@f82d47432161:/# ip addr
eth0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:04 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.4/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe11:4/64 scope link
valid_lft forever preferred_lft forever

root@f82d47432161:/# ping google.com
ping: unknown host

It seems that the host did not forward the packets to the container.

Member

runcom commented May 23, 2015

@VirtualSniper do you have net.ipv4.ip_forward on?
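
For anyone else landing here, a minimal sketch of how to check (and enable) IPv4 forwarding; the sysctl.conf location may differ by distro, and the commands need root:

```shell
# Check whether IPv4 forwarding is on; containers behind the
# docker0 bridge need it to reach the outside world.
sysctl net.ipv4.ip_forward          # prints net.ipv4.ip_forward = 0 or 1

# Enable it for the running kernel:
sysctl -w net.ipv4.ip_forward=1

# Persist it across reboots (path may vary by distro):
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
```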

cfpeng commented May 25, 2015

yes

Contributor

aboch commented Jun 2, 2015

@VirtualSniper are you able to reproduce this?
I tried with no luck.
If you can reproduce it, can you please capture what is going on on the vethxxx interface. Thanks.

cfpeng commented Jun 3, 2015

@aboch yes, here it is:
root@host# tshark -i vetheee49f3
1 0.000000 172.17.0.3 -> 8.8.8.8 DNS 70 Standard query 0x4b9f A google.com
2 5.005164 172.17.0.3 -> 8.8.8.8 DNS 70 Standard query 0x4b9f A google.com
3 5.007449 02:42:ac:11:00:03 -> 52:06:ed:10:28:2a ARP 42 Who has 172.17.42.1? Tell 172.17.0.3
4 5.007462 52:06:ed:10:28:2a -> 02:42:ac:11:00:03 ARP 42 172.17.42.1 is at 52:06:ed:10:28:2a
5 10.010424 172.17.0.3 -> 8.8.8.8 DNS 70 Standard query 0x8327 A google.com
6 15.015621 172.17.0.3 -> 8.8.8.8 DNS 70 Standard query 0x8327 A google.com

Contributor

clnperez commented Jun 15, 2015

I was also running into this. I was able to work around it by deactivating and deleting docker0 (after stopping docker), and then starting docker again (which re-creates docker0).

I do have NetworkManager running, since this is on my laptop, and a VPN running at the moment. But this was also happening while I was in the office on Friday (sans VPN).

I wouldn't be surprised if this is a known issue and there are other open issues for it. Anyone know who to tag from the Docker networking side to find out more?
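
For reference, the workaround described above as commands — a sketch that assumes a systemd host with the default docker0 bridge, run as root:

```shell
systemctl stop docker      # stop the daemon first
ip link set docker0 down   # deactivate the stale bridge
ip link delete docker0     # delete it (brctl delbr docker0 also works)
systemctl start docker     # the daemon re-creates docker0 on startup
```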

bear0330 commented Jul 22, 2015

I have the same problem, but it does not always happen. At first everything works fine; then, after a few days pass, or after I build some images and start/stop containers, the containers sometimes cannot connect to anything anymore. All my running containers are affected the same way and lose their internet connection; for example, curl to GitHub (via IP or domain) fails:

[root@2b7308d /]# curl http://192.30.252.129
curl: (7) Failed connect to 192.30.252.129:80; No route to host

The only way I can fix this is to restart the docker daemon; then everything works again.
But it bothers me a lot: all my apps and services in containers go down, and I don't even know it until something errors out.

Any suggestions for this? Thanks.

Member

thaJeztah commented Aug 9, 2015

ping @aboch

Contributor

aboch commented Aug 10, 2015

There was a bug in the bridge driver code where the Linux bridge interface's MAC address would not be programmed as "SET" on 4.x (x < 3) kernels. The bug is present in docker 1.6.2.

@cfpeng I see your host is running 4.0.2, so it would be affected.
The issue has recently been fixed here and made it into the docker/docker code via this, so it will be in docker 1.8.0.

@bear0330 can you check whether your kernel is 4.x as well? That would explain why you hit the issue after a while, maybe after spawning a new container.

@cfpeng @bear0330 could you please check whether you are still hitting this issue with the latest 1.8.0-rcX image?

bear0330 commented Aug 11, 2015

@aboch My kernel is 3.10.0-229.7.2.el7.x86_64. I am running docker on Azure; I am not sure whether this is an Azure issue (I have no idea). I am trying to run docker on Vultr.

dverbeek84 commented Aug 13, 2015

@aboch same issue here with the same kernel as @bear0330

fkeet commented Aug 14, 2015

Similar issue. Removing the bridge (and relevant cleanup) did not have an effect.

$ uname -a
Linux hostname 3.19.0-25-generic #26-Ubuntu SMP Fri Jul 24 21:17:31 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

$ docker version
Client:
Version: 1.8.1
API version: 1.20
Go version: go1.4.2
Git commit: d12ea79
Built: Thu Aug 13 02:40:42 UTC 2015
OS/Arch: linux/amd64

Server:
Version: 1.8.1
API version: 1.20
Go version: go1.4.2
Git commit: d12ea79
Built: Thu Aug 13 02:40:42 UTC 2015
OS/Arch: linux/amd64

$ docker run -ti ubuntu /bin/bash
$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=56 time=0.967 ms
....

$ ping www.google.com
ping: unknown host www.google.com
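
Since ping to 8.8.8.8 works but name resolution fails, the break is in DNS delivery rather than routing. A couple of quick checks from inside the container (dig is not in the base image; on Debian/Ubuntu it comes from the dnsutils package):

```shell
# Which resolver is the container configured to use?
cat /etc/resolv.conf

# Query Google's public DNS directly, bypassing the configured
# resolver, with a short timeout and a single try:
dig @8.8.8.8 www.google.com +time=2 +tries=1
```

If the direct query times out while the same query works on the host, DNS responses are not making it back through docker0, which matches the captures earlier in this thread.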

Contributor

aboch commented Aug 15, 2015

@bear0330 Your issue (from what I can see in your logs) is different from the one hit by @cfpeng and @fkeet. Theirs is related to DNS response packets not being delivered to the container; yours is related to IP reachability: you would get "no route to host" if, for example, the default gateway IP in your container is unset or does not belong to the same network as eth0.

Contributor

aboch commented Aug 15, 2015

@cfpeng Given that DNS requests are routed from docker0 to eth0 but the responses are not, it makes me think this has to do with iptables.
If you have not done so already, could you please run the check-config.sh script that you find in docker/contrib/ to see whether any required iptables component is missing.

@fkeet Can you also try the same.

Thanks.
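
One way to fetch and run that script; the URL assumes check-config.sh still lives under contrib/ on the master branch of the docker/docker repository, so adjust it if the file has moved:

```shell
# Download the kernel-config checker from the docker/docker repo
# (path is an assumption; see lead-in) and run it:
curl -fsSL https://raw.githubusercontent.com/docker/docker/master/contrib/check-config.sh -o check-config.sh
chmod +x check-config.sh
./check-config.sh   # flags any missing kernel/netfilter/iptables options
```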

bear0330 commented Aug 15, 2015

@aboch I got the same issue after running docker on a Vultr machine for a few days. Now my container cannot connect to the internet again. Currently in the container (my container's hostname is status.xxx.com):

[root@status /]# curl http://www.google.com/
curl: (6) Could not resolve host: www.google.com; Unknown error
[root@status /]# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.

(it hangs; I press Ctrl+C to break)

[root@status /]# curl http://192.30.252.129
curl: (7) Failed connect to 192.30.252.129:80; No route to host

Running docker run -ti ubuntu /bin/bash on the host:

[root@mercury Redis]# docker run -ti ubuntu /bin/bash
Unable to find image 'ubuntu:latest' locally
Trying to pull repository docker.xxx.com/ubuntu ... failed
Trying to pull repository docker-protected.xxx.com/ubuntu ... failed
latest: Pulling from docker.io/ubuntu
6071b4945dcf: Pulling fs layer
6071b4945dcf: Download complete
5bff21ba5409: Download complete
e5855facec0b: Download complete
8251da35e7a7: Download complete
Status: Downloaded newer image for docker.io/ubuntu:latest
root@63840a13cad5:/# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.

(it hangs here too; also Ctrl+C)

[root@mercury Redis]# docker version
Client version: 1.6.2
Client API version: 1.18
Go version (client): go1.4.2
Git commit (client): ba1f6c3/1.6.2
OS/Arch (client): linux/amd64
Server version: 1.6.2
Server API version: 1.18
Go version (server): go1.4.2
Git commit (server): ba1f6c3/1.6.2
OS/Arch (server): linux/amd64
[root@mercury Redis]# docker info
Containers: 18
Images: 142
Storage Driver: devicemapper
 Pool Name: docker-253:1-304602-pool
 Pool Blocksize: 65.54 kB
 Backing Filesystem: extfs
 Data file:
 Metadata file:
 Data Space Used: 3.558 GB
 Data Space Total: 107.4 GB
 Data Space Available: 103.8 GB
 Metadata Space Used: 7.365 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.14 GB
 Udev Sync Supported: true
 Library Version: 1.02.93-RHEL7 (2015-01-28)
Execution Driver: native-0.2
Kernel Version: 3.10.0-229.11.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
CPUs: 2
Total Memory: 1.797 GiB
Name: mercury.xxx.com
ID: V5PO:Z7CC:LTPM:NICT:I2G5:6B6K:AVTP:IOH5:6GNW:JLOK:VCUF:MSY

There are some log messages in my /var/log/messages; I don't know whether they are related:

Aug 15 11:20:02 mercury systemd: Starting Session 5445 of user root.
Aug 15 11:20:02 mercury systemd: Started Session 5445 of user root.
Aug 15 11:20:09 mercury docker: time="2015-08-15T11:20:09Z" level=info msg="Container 6067b64e5538d86adc350b6f61cccf1ce2327d4e781a835e642c8e36d8503534 failed to exit within 10 seconds of SIGTERM - using the force"
Aug 15 11:20:09 mercury kernel: docker0: port 1(veth6388a9a) entered disabled state
Aug 15 11:20:09 mercury kernel: device veth6388a9a left promiscuous mode
Aug 15 11:20:09 mercury kernel: docker0: port 1(veth6388a9a) entered disabled state
Aug 15 11:20:09 mercury docker: time="2015-08-15T11:20:09Z" level=info msg="+job log(die, 6067b64e5538d86adc350b6f61cccf1ce2327d4e781a835e642c8e36d8503534, docker.xxx.com/service/redis:2.8.19-latest)"
Aug 15 11:20:09 mercury docker: time="2015-08-15T11:20:09Z" level=info msg="-job log(die, 6067b64e5538d86adc350b6f61cccf1ce2327d4e781a835e642c8e36d8503534, docker.xxx.com/service/redis:2.8.19-latest) = OK (0)"
Aug 15 11:20:09 mercury docker: time="2015-08-15T11:20:09Z" level=info msg="+job release_interface(6067b64e5538d86adc350b6f61cccf1ce2327d4e781a835e642c8e36d8503534)"
....
Aug 15 11:20:09 mercury docker: time="2015-08-15T11:20:09Z" level=info msg="+job container_inspect(redis-2.8.19)"
Aug 15 11:20:09 mercury NetworkManager[558]: <info>  (veth6388a9a): device state change: activated -> unmanaged (reason 'removed') [100 10 36]
Aug 15 11:20:09 mercury NetworkManager[558]: <info>  (veth6388a9a): deactivating device (reason 'removed') [36]
Aug 15 11:20:09 mercury docker: time="2015-08-15T11:20:09Z" level=info msg="-job release_interface(6067b64e5538d86adc350b6f61cccf1ce2327d4e781a835e642c8e36d8503534) = OK (0)"
Aug 15 11:20:09 mercury NetworkManager[558]: <warn>  (docker0): failed to detach bridge port veth6388a9a
Aug 15 11:20:09 mercury dbus-daemon: dbus[470]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service'
Aug 15 11:20:09 mercury dbus[470]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service'
Aug 15 11:20:09 mercury systemd: Starting Network Manager Script Dispatcher Service...
Aug 15 11:20:09 mercury dbus-daemon: dbus[470]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
Aug 15 11:20:09 mercury systemd: Started Network Manager Script Dispatcher Service.
Aug 15 11:20:09 mercury nm-dispatcher: Dispatching action 'down' for veth6388a9a
Aug 15 11:20:09 mercury docker: time="2015-08-15T11:20:09Z" level=info msg="+job log(stop, 6067b64e5538d86adc350b6f61cccf1ce2327d4e781a835e642c8e36d8503534, docker.xxx.com/service/redis:2.8.19-latest)"
Aug 15 11:20:09 mercury docker: time="2015-08-15T11:20:09Z" level=info msg="-job log(stop, 6067b64e5538d86adc350b6f61cccf1ce2327d4e781a835e642c8e36d8503534, docker.xxx.com/service/redis:2.8.19-latest) = OK (0)"
...

If you need any further information, please let me know and I will provide it if I can.

cfpeng commented Aug 18, 2015

@aboch I have upgraded to 1.8.1; the issue still exists.

Contributor

abronan commented Aug 18, 2015

/cc @LK4D4

Contributor

aboch commented Aug 18, 2015

@cfpeng @fkeet Just to make sure, can you please post the content of /etc/resolv.conf inside your container.

Also the output of sudo iptables -t nat -L -nv on your host. I want to check whether the MASQUERADE rule is there.

cfpeng commented Aug 18, 2015

@aboch Here is:

$sudo iptables -t nat -L -nv
Chain PREROUTING (policy ACCEPT 1944 packets, 117K bytes)
pkts bytes target prot opt in out source destination
1929 117K DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 1929 packets, 117K bytes)
pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 1126 packets, 69647 bytes)
pkts bytes target prot opt in out source destination
7 497 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 1119 packets, 69150 bytes)
pkts bytes target prot opt in out source destination
22 1365 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0

Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination

$sudo docker run --rm -it ubuntu /bin/bash
root@0aeb261357d1:/# cat /etc/resolv.conf
# Generated by resolvconf
nameserver 8.8.8.8

Contributor

aanm commented Aug 21, 2015

@cfpeng do you have SELinux enabled?

cfpeng commented Aug 24, 2015

@aanand no.

rcousens commented Aug 25, 2015

I am encountering this issue too. I have to run systemctl restart docker on Arch Linux to get access to the internet from within containers.

ajanssens commented Aug 25, 2015

I'm running into the same problem; I've tried everything I found on Google, but nothing fixed the issue.

$ sudo iptables -t nat -L -nv
Chain PREROUTING (policy ACCEPT 47 packets, 3371 bytes)
pkts bytes target prot opt in out source destination
6 423 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 6 packets, 423 bytes)
pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 1132 packets, 128K bytes)
pkts bytes target prot opt in out source destination
0 0 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 1132 packets, 128K bytes)
pkts bytes target prot opt in out source destination
41 2948 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0

Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination

$ sudo docker run --rm -it ubuntu /bin/bash
root@abca1b94e4dc:/# cat /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4

cfpeng commented Sep 1, 2015

After I reinstalled the OS, the problem was resolved.

lykhouzov commented Sep 1, 2015

I had the same error.
Restarting the docker process helped.
It looks like there were some blocked processes after a package update.

poga commented Sep 18, 2015

I've encountered a similar issue. Containers can't access the internet until I manually run systemctl restart docker on Arch Linux.

One thing I've noticed: when my computer has just booted up, ip route does not contain the docker0 bridge route.

Here's the output before restarting docker:

$ ip route
default via 192.168.0.1 dev wlp2s0  proto static  metric 600
192.168.0.0/24 dev wlp2s0  proto kernel  scope link  src 192.168.0.107  metric 600

After docker restarted:

$ ip route
default via 192.168.0.1 dev wlp2s0  proto static  metric 600
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.42.1
192.168.0.0/24 dev wlp2s0  proto kernel  scope link  src 192.168.0.107  metric 600

not sure if this would help.
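The boot-time check poga describes can be scripted. A minimal sketch, where the captured route text stands in for live output (on a real host you would substitute `routes=$(ip route)` and actually run the restart command):

```shell
#!/bin/sh
# Sketch: detect a missing docker0 route at boot.
# "routes" below is a captured sample; on a live host use: routes=$(ip route)
routes="default via 192.168.0.1 dev wlp2s0  proto static  metric 600
192.168.0.0/24 dev wlp2s0  proto kernel  scope link  src 192.168.0.107  metric 600"

if printf '%s\n' "$routes" | grep -q 'dev docker0'; then
  status="docker0 route present"
else
  # On a real host this is where you would run: systemctl restart docker
  status="docker0 route missing; restart docker (e.g. systemctl restart docker)"
fi
echo "$status"
```

With the sample above (no docker0 line), the script reports the route as missing, matching the before/after `ip route` output in the comment.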

svenmueller commented Dec 29, 2015

@aboch Thx for the feedback. How can I do that exactly?

Contributor

aboch commented Dec 29, 2015

@svenmueller

While the ping command is running in your container check the following:

  • Run a tcpdump -i docker0 icmp [-v] to check what icmp packets are going in/out on the docker0 interface
  • Then run it on the interface which connects your host to internet (eth0 ?) to see if packets are sent out and if response are received to/from your host
  • Then run it on the vethxxxx interface which connects your container to the linux bridge docker0 (if you don't know which veth interface it is, try a few while the ping runs, so you can see if you are on the right one)

If you don't see the expected tx/rx after each of the steps above, repeatedly check the iptables filter rules in another shell: iptables -t filter -nvL, to see if any pkts counter is increasing for a rule with a DROP target. (You may want to | grep -v "0     0" to skim the non-hit rules from the output.)
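The last step above (spotting dropping rules by their counters) can be automated. A minimal sketch, where the here-string stands in for live `iptables -t filter -nvL` output so it runs without root; on a real host you would pipe the actual command through the same awk filter:

```shell
#!/bin/sh
# Sketch: list only DROP/REJECT rules whose packet counter is non-zero.
# "sample" stands in for live output of: iptables -t filter -nvL
sample='   42  2520 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0  0.0.0.0/0
   12   720 REJECT     all  --  *      *       0.0.0.0/0  0.0.0.0/0'

# Field 1 is the packet counter, field 3 the target.
hits=$(printf '%s\n' "$sample" | awk '$1 > 0 && ($3 == "DROP" || $3 == "REJECT")')
echo "$hits"
```

Only the REJECT rule with a non-zero counter survives the filter; any rule it prints is a candidate for where the container's packets are being dropped.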

svenmueller commented Dec 29, 2015

Thx @aboch for your help. Here is my result:

I can see the outgoing packets on the interfaces, but no reply:

  • docker0
23:23:48.076002 IP (tos 0x0, ttl 64, id 29434, offset 0, flags [DF], proto ICMP (1), length 84)
    172.17.0.2 > google-public-dns-a.google.com: ICMP echo request, id 9984, seq 1118, length 64
  • eth0
23:23:43.074293 IP (tos 0x0, ttl 63, id 28889, offset 0, flags [DF], proto ICMP (1), length 84)
    172.17.0.2 > google-public-dns-a.google.com: ICMP echo request, id 9984, seq 1113, length 64
  • vethxxxx
23:24:26.092552 IP (tos 0x0, ttl 64, id 35006, offset 0, flags [DF], proto ICMP (1), length 84)
    172.17.0.2 > google-public-dns-a.google.com: ICMP echo request, id 9984, seq 1156, length 64

looking at the iptables counts, the only counter which changes is:

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    8   392 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            /* 100v4: forward to DOCKER chain */
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            /* 101v4: accept docker ESTABLISHED,RELATED */ state RELATED,ESTABLISHED
# --- the following rule is counting up!
49721 2776K ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0            /* 102v4: accept docker0 outgoing */
# ---
    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0            /* 103v4: accept docker0 local */
    0     0 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* 810v4-drop: drop forward */ reject-with icmp-port-unreachable
    0     0 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* 950v4-basicfw: drop forward */ reject-with icmp-port-unreachable

I can't see any other rule with a rising counter that might drop the packets. Could it somehow be related to the missing gateway setting?

Contributor

aboch commented Dec 30, 2015

@svenmueller

Thanks for the info.
If you do not see any echo response packets even on eth0 interface, then something outside your docker host is blocking the ping.

svenmueller commented Dec 30, 2015

@aboch the problem is somehow related to the docker setup itself and not caused by a problem outside the host: as soon as I restarted the docker service, the network issue disappeared.

anas-aso commented Jan 11, 2016

@aboch @svenmueller after digging a little bit more, I found out that the NAT rule in the POSTROUTING chain is missing whenever the "Gateway" key is absent from the output of docker network inspect bridge.
Unfortunately, I can't find out why!
Could it be a bug in the docker daemon, since a restart of the daemon solves the issue?
BTW, this only happens after a reboot of the host.
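The missing-NAT-rule condition is easy to check for. A minimal sketch, where the captured rule list stands in for live output (on a real host you would substitute `nat_rules=$(iptables -t nat -S POSTROUTING)`, which needs root); the masquerade command it suggests is the rule Docker normally installs for the default bridge, and the subnet may differ on your host:

```shell
#!/bin/sh
# Sketch: detect the missing masquerade rule described above.
# "nat_rules" is a sample of: iptables -t nat -S POSTROUTING
# captured while the bug occurs (no MASQUERADE rule present).
nat_rules='-P POSTROUTING ACCEPT'

if printf '%s\n' "$nat_rules" | grep -q 'MASQUERADE'; then
  fix=""
else
  # Rule Docker normally adds for the default bridge (subnet is an example):
  fix='iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE'
fi
echo "${fix:-masquerade rule present}"
```

Adding the rule by hand is an alternative to restarting the daemon, though a restart also re-reports the "Gateway" key in docker network inspect bridge.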

anas-aso commented Jan 11, 2016

@aboch I want to add that I have this issue on my local install too, using docker-machine (Mac OS).

mikeatlas commented Jan 20, 2016

I'm going to add my experience here resolving similar (but perhaps different) issues.

Problem
I can't access DNS from my containers, though I can from my host. I can reach the internet from within my containers using direct IP addresses, but I can't resolve any DNS names.

Solution
I'll post in a subsequent reply, but before that some things to test and share, first:

$ uname -a
Linux mymachine 3.13.0-74-generic #118-Ubuntu SMP \
    Thu Dec 17 22:52:10 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
$ docker info
Containers: 76
Images: 56
Server Version: 1.9.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 208
 Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-74-generic
Operating System: Ubuntu 14.04.3 LTS
CPUs: 2
Total Memory: 7.775 GiB
Name: mymachine
ID: VTNY:ETC
$ docker version
Client:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   a34a1d5
 Built:        Fri Nov 20 13:12:04 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   a34a1d5
 Built:        Fri Nov 20 13:12:04 UTC 2015
 OS/Arch:      linux/amd64
$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "aec41fdfc6195074c49b330ce04a28e11d72788bc7204fe5f5ae2bf39b642a25",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.17.0.1/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "9001"
        }
    }
]
$ route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         10.20.10.1     0.0.0.0         UG    0      0        0 wlan0
10.20.192.0     *               255.255.240.0   U     9      0        0 wlan0
172.17.0.0      *               255.255.0.0     U     0      0        0 docker0

Note: Docker check-config.sh is on github/docker/docker/contrib.

$ wget https://raw.githubusercontent.com/docker/docker/master/contrib/check-config.sh
$ chmod +x check-config.sh
$ ./check-config.sh 
warning: /proc/config.gz does not exist, searching other paths for kernel config ...
info: reading kernel config from /boot/config-3.13.0-74-generic ...

Generally Necessary:
- cgroup hierarchy: properly mounted [/sys/fs/cgroup]
- apparmor: enabled and tools installed
- CONFIG_NAMESPACES: enabled
- CONFIG_NET_NS: enabled
- CONFIG_PID_NS: enabled
- CONFIG_IPC_NS: enabled
- CONFIG_UTS_NS: enabled
- CONFIG_DEVPTS_MULTIPLE_INSTANCES: enabled
- CONFIG_CGROUPS: enabled
- CONFIG_CGROUP_CPUACCT: enabled
- CONFIG_CGROUP_DEVICE: enabled
- CONFIG_CGROUP_FREEZER: enabled
- CONFIG_CGROUP_SCHED: enabled
- CONFIG_CPUSETS: enabled
- CONFIG_MEMCG: enabled
- CONFIG_MACVLAN: enabled (as module)
- CONFIG_VETH: enabled (as module)
- CONFIG_BRIDGE: enabled (as module)
- CONFIG_BRIDGE_NETFILTER: enabled
- CONFIG_NF_NAT_IPV4: enabled (as module)
- CONFIG_IP_NF_FILTER: enabled (as module)
- CONFIG_IP_NF_TARGET_MASQUERADE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_ADDRTYPE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_CONNTRACK: enabled (as module)
- CONFIG_NF_NAT: enabled (as module)
- CONFIG_NF_NAT_NEEDED: enabled
- CONFIG_POSIX_MQUEUE: enabled

Optional Features:
- CONFIG_USER_NS: enabled
- CONFIG_SECCOMP: enabled
- CONFIG_MEMCG_KMEM: enabled
- CONFIG_MEMCG_SWAP: enabled
- CONFIG_MEMCG_SWAP_ENABLED: missing
    (note that cgroup swap accounting is not enabled in your
      kernel config, you can enable it by setting boot option "swapaccount=1")
- CONFIG_RESOURCE_COUNTERS: enabled
- CONFIG_BLK_CGROUP: enabled
- CONFIG_IOSCHED_CFQ: enabled
- CONFIG_BLK_DEV_THROTTLING: enabled
- CONFIG_CGROUP_PERF: enabled
- CONFIG_CGROUP_HUGETLB: enabled
- CONFIG_NET_CLS_CGROUP: enabled (as module)
- CONFIG_NETPRIO_CGROUP: enabled (as module)
- CONFIG_CFS_BANDWIDTH: enabled
- CONFIG_FAIR_GROUP_SCHED: enabled
- CONFIG_RT_GROUP_SCHED: missing
- CONFIG_EXT3_FS: missing
- CONFIG_EXT3_FS_XATTR: missing
- CONFIG_EXT3_FS_POSIX_ACL: missing
- CONFIG_EXT3_FS_SECURITY: missing
    (enable these ext3 configs if you are using ext3 as backing filesystem)
- CONFIG_EXT4_FS: enabled
- CONFIG_EXT4_FS_POSIX_ACL: enabled
- CONFIG_EXT4_FS_SECURITY: enabled
- Storage Drivers:
  - "aufs":
    - CONFIG_AUFS_FS: enabled (as module)
  - "btrfs":
    - CONFIG_BTRFS_FS: enabled (as module)
  - "devicemapper":
    - CONFIG_BLK_DEV_DM: enabled
    - CONFIG_DM_THIN_PROVISIONING: enabled (as module)
  - "overlay":
    - CONFIG_OVERLAY_FS: missing
  - "zfs":
    - /dev/zfs: missing
    - zfs command: missing
    - zpool command: missing

And now for some interactive tests. Notice that --net=host allows me to download the image and ping with the DNS lookup working correctly (but any docker build commands with, for example, RUN apt-get install git die when trying to reach outside-world package repos):

$ docker run -it --net=host ubuntu ping -w1 google.com 
PING google.com (4.53.56.119) 56(84) bytes of data.
64 bytes from 4.53.56.119: icmp_seq=1 ttl=59 time=1.23 ms

--- google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.235/1.235/1.235/0.000 ms

Note that without --net=host, I'm dead in the water for DNS, but if I skip DNS, I can ping Google's DNS server by IP without --net=host:

$ docker run -it ubuntu ping -w1 google.com 
ping: unknown host google.com

$ docker run -it ubuntu ping -w1 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=55 time=6.80 ms

--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 6.801/6.801/6.801/0.000 ms

Also, the relevant partial contents of my /etc/default/docker settings for DOCKER_OPTS (note the second DNS entry is a second-chance fallback to my ISP's, Comcast's, DNS server):

DOCKER_OPTS="--dns 8.8.8.8 --dns 75.75.75.76"

Just for kicks, let's make sure I'm really starting the Docker daemon on Ubuntu 14.04 using upstart and not systemd, which is the newer choice in Ubuntu 14.10 and Ubuntu 15 and does mislead some people like myself... yes, I'm using upstart, so no need to look any further at #9889. More info on how to determine your Linux init system here.

$ dpkg -S /sbin/init
upstart: /sbin/init

Finally, since I'm on Ubuntu, let's do the thing people often suggest: delete the docker0 bridge and restart the daemon (it should recreate it). This requires the bridge-utils package and works on other Linuxes as well.

$ sudo apt-get install bridge-utils -y
$ sudo service docker stop
$ sudo ip link set dev docker0 down
$ sudo brctl delbr docker0
$ sudo service docker start
$ docker network inspect bridge
   [... same as above ^^^ when run before ]

Okay, next post I will show what did resolve my lack of DNS access from within my containers.

mikeatlas commented Jan 20, 2016

Okay, next day, same computer, same exact setup, but different network, and a new idea from this thread to try.

I also recommend reading the latest Docker Networking guide fully:

https://docs.docker.com/engine/userguide/networking/dockernetworks/

Note: I used explicit --dns=X.X.X.X of my office internet ISP instead of --net=host, and look at this!

$ docker run -it --dns=10.20.100.1 ubuntu ping -w1 google.com 
PING google.com (4.53.56.123) 56(84) bytes of data.
64 bytes from 4.53.56.123: icmp_seq=1 ttl=58 time=1.44 ms

--- google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.440/1.440/1.440/0.000 ms

Whoa! And then someone else here mentioned --iptables, so let me try putting those in my DOCKER_OPTS:

DOCKER_OPTS="--iptables=true --dns=10.20.100.1 --dns=8.8.8.8"

I also noticed one thing that I haven't isolated yet: some documentation and discussion of these --dns flags use the = sign, some do not. I have no idea if it matters, but I restarted the daemon anyway:

$ sudo service docker restart

And then took a deep breath:

$ docker run -it ubuntu ping -w1 google.com 
PING google.com (4.53.56.123) 56(84) bytes of data.
64 bytes from 4.53.56.123: icmp_seq=1 ttl=58 time=1.35 ms

--- google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.353/1.353/1.353/0.000 ms

Wow. I'm in business. And I can run docker build commands which include things like RUN apt-get install git just fine.

I will possibly post one more comment here in this thread (or not), but I intend to re-verify from my home network whether there is any difference, or whether it really was just my configuration changes (--iptables=true and using --dns=X with the = sign, as well as putting my office DNS first).

I hope these two posts help others track down or try different ideas to investigate their problems. The above tips got me past my problem after banging my head for way too long on this strange matter.

mikeatlas commented Jan 21, 2016

So, it really was an issue at home with my home WiFi router. Really unbelievable, but my D-Link DIR-655 has a built-in DNS server feature, which is described as "Advanced DNS is a free security option that provides Anti-Phishing to protect your Internet connection from fraud and navigation improvements such as auto-correction of common URL typos."

I turned that garbage off, specified Google's 8.8.8.8 as the primary DNS server and Comcast's (my home ISP's) as my secondary, rebooted it, and now I can run the Docker daemon correctly at home. I've never had any kind of DNS issues with any other devices connected to this router until now, so I wonder what exactly its issue is with DNS requests coming from Docker?

Screenshot below.

DNS issues could be your router!

mikeatlas commented Jan 26, 2016

For what it's worth, the Advanced DNS feature of the D-Link DIR-655 is fully described as:

Advanced DNS :

DNS stands for Domain Name System. The DNS servers act as a phonebook and translate the human-friendly domain name into its corresponding IP address. Advanced DNS Services for D-Link is powered by Best Path Networks, a subsidiary of OpenDNS that provides anti-phishing and DNS services to partners like D-Link. OpenDNS is the world’s largest and fastest-growing provider of free security and DNS infrastructure services. Advanced DNS Services makes your online experience safer and your Internet overall faster and more reliable.

The DNS platform is designed to not interfere with any specific protocol. However, a small subset of spam filtering solutions may be confused by receiving search responses for domains that do not exist. It is recommended that the enhanced search experience be disabled for clients that operate an on-site mail server. DNS does not affect upload or download speeds. These are controlled exclusively by your Internet Service Provider. D-Link and Best Path Networks do not collect or store any personally identifiable DNS information about Advanced DNS Services users.

Your search results are powered by Yahoo. The search function provides you with a much more fluid browsing experience. When a site cannot be reached, or a site does not exist, we will provide you with search suggestions instead of the generic error message displayed by your browser. We also automatically correct some of the common typos users make in the address bar. The typo-correction feature only works for top level domains that have been misspelled, such .cmo and .ogr. Sometimes you might be mis-directed to the search results page. If you clicked on a link in a spam email it is quite possible that the site has been disabled for abuse. Because the site no longer exists you may receive our search page.

I'm wondering what Best Path Networks' particular problem could be. OpenDNS' engineering blog posts about Docker frequently, maybe when I have more time to dig into this I'll track what exactly in Best Path Network's filtering is causing false-positive filtering on my home router.

Drewch commented Jan 27, 2016

I have the same DNS problem; setting nameserver 8.8.8.8 in /etc/resolv.conf immediately fixed it for me.
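For reference, the same fix can be applied without hand-editing the file inside the container (such edits are lost when the container is recreated). A sketch using Docker's real `--dns` flag; the `/etc/default/docker` path applies to Debian/Ubuntu-style installs of this era:

```shell
# Per-container: hand the resolver to the container explicitly.
docker run --rm --dns 8.8.8.8 busybox nslookup google.com

# Daemon-wide (packages that read /etc/default/docker):
echo 'DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4"' | sudo tee -a /etc/default/docker
sudo service docker restart
```

New containers then get the configured nameservers written into their /etc/resolv.conf automatically.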

anas-aso commented Jan 27, 2016

I still have the same issue, but it doesn't have anything to do with DNS. It's the NAT rule in the POSTROUTING chain of the nat table that is missing. Once I restart the docker daemon or add the rule manually, my containers can access the internet again. But both solutions are just workarounds! Restarting the docker daemon kills the running containers, and adding the rule manually may cause other issues (overwriting other rules, making them useless, etc.).
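For anyone hitting the same symptom, the missing rule can be restored by hand. This is a sketch of the rule the daemon normally installs for the default bridge (it matches the MASQUERADE line visible in iptables output later in this thread); adjust the subnet if your docker0 network differs from 172.17.0.0/16:

```shell
# Restore the NAT rule for the default bridge
# (check your subnet with: ip addr show docker0).
sudo iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE

# Confirm it is back:
sudo iptables -t nat -L POSTROUTING -nv | grep MASQUERADE
```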

stuart-warren commented Feb 10, 2016

We have a similar problem too.
It works fine for a number of hours after a restart of the daemon, but then something in DNS resolution breaks.
Docker 1.9.1

Current theory is that iptables rules are getting overridden/removed by a puppet module running (via cron) on the hosts. Does that sound plausible to anyone else?
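One way to test that theory is to snapshot the nat table on a schedule and log whenever it changes between snapshots. A rough sketch; the file paths and the log tag are arbitrary examples:

```shell
# Run from cron; logs when the nat table differs from the last snapshot,
# which would catch a config-management run flushing Docker's rules.
sudo iptables-save -t nat > /var/tmp/nat.now
if ! diff -q /var/tmp/nat.prev /var/tmp/nat.now >/dev/null 2>&1; then
    logger -t nat-watch "nat table changed since last snapshot"
fi
sudo mv /var/tmp/nat.now /var/tmp/nat.prev
```

Correlating the log timestamps with the puppet cron schedule would confirm or rule out the clobbering.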

lucian303 commented Mar 21, 2016

I was getting this problem while building images and running containers on version 1.10.3, build 20f81dd and docker-machine version 0.6.0, build e27fb87. Both docker and docker-machine have their default configs. Restarting docker machine seems to be the only thing I tried that fixes it. Not sure if this will come back again as this is a fresh install of docker/docker-machine from Friday that worked fine then and didn't work at all until the restart today.
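For completeness, the restart that fixed it can be done from the CLI; "default" is the usual Docker Toolbox machine name, so substitute yours. Refreshing the client environment afterwards matters, because the machine's IP/certs can change across a restart:

```shell
# Restart the machine, then refresh the client's environment variables.
docker-machine restart default
eval "$(docker-machine env default)"
docker info   # should reach the daemon again
```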

mikemucc commented Mar 31, 2016

We are having the same issue. After about 16-19 hours of uptime (longer on the weekends), the container goes into a state where it cannot talk to the outside world. Restarting the container or the docker daemon (which in turn restarts the container) will bring everything back to operating properly... for the next 16-19 hours or so. Inside the container is a node.js webapp.

simper commented Apr 8, 2016

I found another possible cause: the permissions on the container's /etc/resolv.conf sometimes get changed to 600 (after a docker restart, or some other steps I'm not sure of), so applications running without root privileges can no longer resolve domain names. Manually changing it back to 644 fixes the issue. I ran into this several times on a Synology NAS running DSM 5.2 before finally figuring out the cause...
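Checking for and fixing this condition can be done without restarting the container. A minimal sketch; the container name is a placeholder:

```shell
CONTAINER=my_app   # substitute your container's name or ID

# Expect -rw-r--r-- (644); -rw------- (600) breaks non-root resolvers.
docker exec "$CONTAINER" ls -l /etc/resolv.conf

# Restore world-readable permissions in place:
docker exec "$CONTAINER" chmod 644 /etc/resolv.conf
```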

terlar commented Apr 18, 2016

@poga , @bwinterton , @fbourigault : Did you guys find any solution for running arch linux systemd-networkd together with docker?

I have been relying on restarting the docker service to fix the ip route. However, it is a bit embarrassing to have people laugh at me over these network issues. I also ran into another issue today which I believe might have the same root cause, but I'm not sure: even after restarting the docker service, it doesn't work with a docker-compose version 2 config that creates another docker network besides the regular one. I just cannot reach the IP addresses in that network. I have a hunch it might be related, but I might be totally wrong.
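One workaround worth trying for the systemd-networkd case is to tell networkd not to touch Docker's interfaces at all. This is a sketch, not a verified fix: the filename is arbitrary, and the `Unmanaged=` option in `[Link]` requires a reasonably recent systemd (see systemd.network(5) for your version):

```ini
# /etc/systemd/network/10-docker-ignore.network  (example filename)
[Match]
Name=docker0 veth*

[Link]
Unmanaged=yes
```

Followed by `systemctl restart systemd-networkd`, this keeps networkd from reconfiguring the bridge or veth pairs that Docker manages itself.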

gregath commented Jul 6, 2016

I'm having similar network issues as well: immediately after restarting docker I'm able to access the internet from the container, but after about a dozen attempts it fails (see below). This is reproducible every time. However, the network failure doesn't occur if I start a container in the success state and keep it running the whole time. 'ip route', 'iptables -L', and 'docker network inspect bridge' are identical in the success and failed states.

...
$ docker run --rm busybox wget http://216.58.219.228
Connecting to 216.58.219.228 (216.58.219.228:80)
Connecting to www.google.com (216.58.192.164:80)
index.html 100% |*******************************| 10432 0:00:00 ETA
$ docker run --rm busybox wget http://216.58.219.228
Connecting to 216.58.219.228 (216.58.219.228:80)
Connecting to www.google.com (216.58.192.164:80)
index.html 100% |*******************************| 10422 0:00:00 ETA
$ docker run --rm busybox wget http://216.58.219.228
Connecting to 216.58.219.228 (216.58.219.228:80)
wget: can't connect to remote host (216.58.219.228): No route to host

$ docker version
Client:
Version: 1.11.2
API version: 1.23
Go version: go1.5.4
Git commit: b9f10c9
Built: Wed Jun 1 21:23:39 2016
OS/Arch: linux/amd64

Server:
Version: 1.11.2
API version: 1.23
Go version: go1.5.4
Git commit: b9f10c9
Built: Wed Jun 1 21:23:39 2016
OS/Arch: linux/amd64
$ uname -a
Linux 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-2+deb8u3 (2016-07-02) x86_64 GNU/Linux

Log of container with network failure
Jul 06 13:47:01 kernel: aufs au_opts_verify:1570:docker[21760]: dirperm1 breaks the protection by the permission bits on the lower branch
Jul 06 13:47:02 kernel: aufs au_opts_verify:1570:docker[21760]: dirperm1 breaks the protection by the permission bits on the lower branch
Jul 06 13:47:02 kernel: aufs au_opts_verify:1570:docker[21545]: dirperm1 breaks the protection by the permission bits on the lower branch
Jul 06 13:47:02 NetworkManager[667]: (vethf44825d): device is virtual, marking as unmanaged
Jul 06 13:47:02 kernel: device veth370000a entered promiscuous mode
Jul 06 13:47:02 kernel: IPv6: ADDRCONF(NETDEV_UP): veth370000a: link is not ready
Jul 06 13:47:02 NetworkManager[667]: (vethf44825d): carrier is OFF
Jul 06 13:47:02 NetworkManager[667]: (vethf44825d): new Veth device (driver: 'unknown' ifindex: 582)
Jul 06 13:47:02 NetworkManager[667]: (vethf44825d): exported as /org/freedesktop/NetworkManager/Devices/870
Jul 06 13:47:02 NetworkManager[667]: (veth370000a): device is virtual, marking as unmanaged
Jul 06 13:47:02 NetworkManager[667]: (veth370000a): carrier is OFF
Jul 06 13:47:02 NetworkManager[667]: (veth370000a): new Veth device (driver: 'unknown' ifindex: 583)
Jul 06 13:47:02 NetworkManager[667]: (veth370000a): exported as /org/freedesktop/NetworkManager/Devices/871
Jul 06 13:47:02 NetworkManager[667]: (veth370000a): device state change: unmanaged -> unavailable (reason 'connection-assumed') [10 20 41]
Jul 06 13:47:02 NetworkManager[667]: (docker0): bridge port veth370000a was attached
Jul 06 13:47:02 NetworkManager[667]: (veth370000a): enslaved to docker0
Jul 06 13:47:02 NetworkManager[667]: (veth370000a): device state change: unavailable -> disconnected (reason 'connection-assumed') [20 30 41]
Jul 06 13:47:02 NetworkManager[667]: Activation (veth370000a) starting connection 'veth370000a'
Jul 06 13:47:02 NetworkManager[667]: Activation (veth370000a) Stage 1 of 5 (Device Prepare) scheduled...
Jul 06 13:47:02 NetworkManager[667]: devices added (path: /sys/devices/virtual/net/vethf44825d, iface: vethf44825d)
Jul 06 13:47:02 NetworkManager[667]: device added (path: /sys/devices/virtual/net/vethf44825d, iface: vethf44825d): no ifupdown configuration found.
Jul 06 13:47:02 NetworkManager[667]: devices added (path: /sys/devices/virtual/net/veth370000a, iface: veth370000a)
Jul 06 13:47:02 NetworkManager[667]: device added (path: /sys/devices/virtual/net/veth370000a, iface: veth370000a): no ifupdown configuration found.
Jul 06 13:47:02 NetworkManager[667]: Activation (veth370000a) Stage 1 of 5 (Device Prepare) started...
Jul 06 13:47:02 NetworkManager[667]: (veth370000a): device state change: disconnected -> prepare (reason 'none') [30 40 0]
Jul 06 13:47:02 NetworkManager[667]: Activation (veth370000a) Stage 2 of 5 (Device Configure) scheduled...
Jul 06 13:47:02 NetworkManager[667]: Activation (veth370000a) Stage 1 of 5 (Device Prepare) complete.
Jul 06 13:47:02 NetworkManager[667]: Activation (veth370000a) Stage 2 of 5 (Device Configure) starting...
Jul 06 13:47:02 NetworkManager[667]: (veth370000a): device state change: prepare -> config (reason 'none') [40 50 0]
Jul 06 13:47:02 NetworkManager[667]: Activation (veth370000a) Stage 2 of 5 (Device Configure) successful.
Jul 06 13:47:02 NetworkManager[667]: Activation (veth370000a) Stage 3 of 5 (IP Configure Start) scheduled.
Jul 06 13:47:02 NetworkManager[667]: Activation (veth370000a) Stage 2 of 5 (Device Configure) complete.
Jul 06 13:47:02 NetworkManager[667]: Activation (veth370000a) Stage 3 of 5 (IP Configure Start) started...
Jul 06 13:47:02 NetworkManager[667]: (veth370000a): device state change: config -> ip-config (reason 'none') [50 70 0]
Jul 06 13:47:02 NetworkManager[667]: Activation (veth370000a) Stage 3 of 5 (IP Configure Start) complete.
Jul 06 13:47:02 NetworkManager[667]: (veth370000a): device state change: ip-config -> secondaries (reason 'none') [70 90 0]
Jul 06 13:47:02 NetworkManager[667]: (veth370000a): device state change: secondaries -> activated (reason 'none') [90 100 0]
Jul 06 13:47:02 NetworkManager[667]: Activation (veth370000a) successful, device activated.
Jul 06 13:47:02 dbus[689]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service'
Jul 06 13:47:02 dbus[689]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
Jul 06 13:47:02 nm-dispatcher[23880]: Dispatching action 'up' for veth370000a
Jul 06 13:47:02 ntpdate[23922]: the NTP socket is in use, exiting
Jul 06 13:47:02 sshd[664]: Received SIGHUP; restarting.
Jul 06 13:47:02 sshd[664]: Server listening on 0.0.0.0 port 22.
Jul 06 13:47:02 sshd[664]: Server listening on :: port 22.
Jul 06 13:47:02 avahi-daemon[688]: Withdrawing workstation service for vethf44825d.
Jul 06 13:47:02 NetworkManager[667]: devices removed (path: /sys/devices/virtual/net/vethf44825d, iface: vethf44825d)
Jul 06 13:47:02 NetworkManager[667]: (veth370000a): link connected
Jul 06 13:47:02 NetworkManager[667]: (docker0): link connected
Jul 06 13:47:02 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth370000a: link becomes ready
Jul 06 13:47:02 kernel: docker0: port 1(veth370000a) entered forwarding state
Jul 06 13:47:02 kernel: docker0: port 1(veth370000a) entered forwarding state
Jul 06 13:47:02 docker[21542]: time="2016-07-06T13:47:02-05:00" level=error msg="containerd: notify OOM events" error="open memory.oom_control: no such file or directory"
Jul 06 13:47:03 avahi-daemon[688]: Joining mDNS multicast group on interface veth370000a.IPv6 with address fe80::fcd1:2cff:fe1f:65c6.
Jul 06 13:47:03 avahi-daemon[688]: New relevant interface veth370000a.IPv6 for mDNS.
Jul 06 13:47:03 avahi-daemon[688]: Registering new address record for fe80::fcd1:2cff:fe1f:65c6 on veth370000a.*.
Jul 06 13:47:05 ntpd[735]: Listen normally on 51 veth370000a fe80::fcd1:2cff:fe1f:65c6 UDP 123
Jul 06 13:47:05 ntpd[735]: peers refreshed
Jul 06 13:47:05 NetworkManager[667]: (vethf44825d): device is virtual, marking as unmanaged
Jul 06 13:47:05 NetworkManager[667]: (vethf44825d): carrier is OFF
Jul 06 13:47:05 NetworkManager[667]: (vethf44825d): new Veth device (driver: 'unknown' ifindex: 582)
Jul 06 13:47:05 NetworkManager[667]: (vethf44825d): exported as /org/freedesktop/NetworkManager/Devices/872
Jul 06 13:47:05 kernel: docker0: port 1(veth370000a) entered disabled state
Jul 06 13:47:06 NetworkManager[667]: (veth370000a): link disconnected (deferring action for 4 seconds)
Jul 06 13:47:06 NetworkManager[667]: devices added (path: /sys/devices/virtual/net/vethf44825d, iface: vethf44825d)
Jul 06 13:47:06 NetworkManager[667]: device added (path: /sys/devices/virtual/net/vethf44825d, iface: vethf44825d): no ifupdown configuration found.
Jul 06 13:47:06 avahi-daemon[688]: Interface veth370000a.IPv6 no longer relevant for mDNS.
Jul 06 13:47:06 avahi-daemon[688]: Leaving mDNS multicast group on interface veth370000a.IPv6 with address fe80::fcd1:2cff:fe1f:65c6.
Jul 06 13:47:06 avahi-daemon[688]: Withdrawing address record for fe80::fcd1:2cff:fe1f:65c6 on veth370000a.
Jul 06 13:47:06 avahi-daemon[688]: Withdrawing workstation service for vethf44825d.
Jul 06 13:47:06 avahi-daemon[688]: Withdrawing workstation service for veth370000a.
Jul 06 13:47:06 kernel: docker0: port 1(veth370000a) entered disabled state
Jul 06 13:47:06 kernel: device veth370000a left promiscuous mode
Jul 06 13:47:06 kernel: docker0: port 1(veth370000a) entered disabled state
Jul 06 13:47:06 NetworkManager[667]: devices removed (path: /sys/devices/virtual/net/vethf44825d, iface: vethf44825d)
Jul 06 13:47:06 NetworkManager[667]: devices removed (path: /sys/devices/virtual/net/veth370000a, iface: veth370000a)
Jul 06 13:47:06 NetworkManager[667]: (veth370000a): device state change: activated -> unmanaged (reason 'removed') [100 10 36]
Jul 06 13:47:06 NetworkManager[667]: (veth370000a): deactivating device (reason 'removed') [36]
Jul 06 13:47:06 NetworkManager[667]: (docker0): failed to detach bridge port veth370000a
Jul 06 13:47:06 NetworkManager[667]: nm_device_get_iface: assertion 'self != NULL' failed
Jul 06 13:47:06 NetworkManager[667]: (veth370000a): released from master (null)
Jul 06 13:47:06 NetworkManager[667]: (docker0): link disconnected (deferring action for 4 seconds)
Jul 06 13:47:06 nm-dispatcher[23880]: Dispatching action 'down' for veth370000a
Jul 06 13:47:08 ntpd[735]: Deleting interface #51 veth370000a, fe80::fcd1:2cff:fe1f:65c6#123, interface stats: received=0, sent=0, dropped=0, active_time=3 secs
Jul 06 13:47:08 ntpd[735]: peers refreshed
Jul 06 13:47:10 NetworkManager[667]: (docker0): link disconnected (calling deferred action)
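The log above shows NetworkManager repeatedly taking over the veth pair as the container starts and stops. A common workaround (a sketch; the filename is arbitrary) is to declare Docker's interfaces unmanaged so NetworkManager leaves them alone:

```ini
# /etc/NetworkManager/conf.d/docker-unmanaged.conf  (example filename)
[keyfile]
unmanaged-devices=interface-name:docker0;interface-name:veth*
```

Then `systemctl restart NetworkManager`. Note that glob patterns like `veth*` in `interface-name:` entries may only be honored on newer NetworkManager versions; on older ones, exact names must be listed.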

mckelvin commented Jul 25, 2016

@cfpeng @fkeet Just to make sure, can you please post the content of /etc/resolv.conf inside your container.

Also the output of sudo iptables -t nat -L -nv on your host. Want to check whether the masquerade rule is there.

Same issue +1. I can't ping 192.168.1.1 (the router). I checked sudo tcpdump -ni docker0 icmp, sudo tcpdump -ni eth icmp and sudo tcpdump -ni veth7128143 icmp while running ping 192.168.1.1 inside one of my Docker containers. I can see packets going from my container to the router, but there are no reply packets coming from the router back to the container.
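
One way to narrow down where the replies get lost, sketched here under the assumption that the uplink interface is eth0 and the container has address 172.17.0.2 (adjust both to your setup), is to capture on the uplink while the ping runs: if masquerading works, packets should leave with the host's source address, never the container's.

```shell
# Run on the host while `ping 192.168.1.1` runs inside the container.
# With working NAT this capture stays silent, because the container's
# private source address should be rewritten before the packet leaves:
sudo tcpdump -ni eth0 icmp and src 172.17.0.2

# Also check how many packets the masquerade rule has matched so far
# (the pkts counter should increase while the ping runs):
sudo iptables -t nat -L POSTROUTING -nv | grep MASQUERADE
```

If the first capture does show packets with the 172.17.x.x source, the masquerade rule is not being applied and the router is replying to an address it cannot route back to.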

@aboch The output of sudo iptables -t nat -L -nv in my case is:

$ sudo iptables -t nat -L -nv
Chain PREROUTING (policy ACCEPT 424 packets, 33408 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain INPUT (policy ACCEPT 13 packets, 804 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 1504 packets, 90379 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain POSTROUTING (policy ACCEPT 1521 packets, 91807 bytes)
 pkts bytes target     prot opt in     out     source               destination

There are no rules there! And the docker daemon was run with --iptables=false in the case above. I tried service docker restart, but it didn't work.
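
If the daemon must keep running with --iptables=false (e.g. to stay out of ufw's way), a common workaround, sketched here under the assumption of the default 172.17.0.0/16 bridge subnet, is to recreate the NAT rule and enable forwarding by hand:

```shell
# Enable IPv4 forwarding on the host (Docker normally does this itself):
sudo sysctl -w net.ipv4.ip_forward=1

# Recreate the masquerade rule Docker would have added with --iptables=true:
sudo iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
```

Note these rules do not survive a reboot unless persisted (e.g. via iptables-persistent or your firewall's own config).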


Then I modified /etc/default/docker and changed the iptables parameter from --iptables=false (set to make ufw happy) to --iptables=true, then ran service docker restart again. This time the masquerade rule is back:

$ sudo iptables -t nat -L -nv
Chain PREROUTING (policy ACCEPT 9 packets, 568 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 4 packets, 230 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 4 packets, 230 bytes)
 pkts bytes target     prot opt in     out     source               destination
    9   568 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0
    0     0 MASQUERADE  tcp  --  *      *       172.17.0.2           172.17.0.2           tcp dpt:443
    0     0 MASQUERADE  tcp  --  *      *       172.17.0.2           172.17.0.2           tcp dpt:80
    0     0 MASQUERADE  tcp  --  *      *       172.17.0.2           172.17.0.2           tcp dpt:22

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0
    0     0 DNAT       tcp  --  !docker0 *       0.0.0.0/0            127.0.0.1            tcp dpt:443 to:172.17.0.2:443
    0     0 DNAT       tcp  --  !docker0 *       0.0.0.0/0            127.0.0.1            tcp dpt:10080 to:172.17.0.2:80
    0     0 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:22 to:172.17.0.2:22

I disabled the iptables parameter again (changed it back to --iptables=false) and restarted the daemon. Now everything works as expected:

$ sudo iptables -t nat -L -nv
Chain PREROUTING (policy ACCEPT 19 packets, 1675 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 183 packets, 10986 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 183 packets, 10986 bytes)
 pkts bytes target     prot opt in     out     source               destination
   21  1388 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0

mmoryakov commented Aug 13, 2016

Getting the same issue.
$docker version
Client:
Version: 1.12.0
API version: 1.24
Go version: go1.6.3
Git commit: 8eab29e
Built: Thu Jul 28 22:11:10 2016
OS/Arch: linux/amd64

Server:
Version: 1.12.0
API version: 1.24
Go version: go1.6.3
Git commit: 8eab29e
Built: Thu Jul 28 22:11:10 2016
OS/Arch: linux/amd64

$ uname -a
Linux ubuntu 4.4.0-34-generic

But in my case nothing helps: restarting the service, rebooting the system, reinstalling Docker, adding new bridges... nothing.

I asked the question at http://askubuntu.com/questions/811895/ubuntu-16-04-iptables-on-postrouting-do-not-recognize-docker0-bridge
but after finding this thread I think the issue is with the docker bridge.
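
When the POSTROUTING rule refuses to appear no matter what, a few quick checks, assuming the default docker0 bridge, can at least localize the failure:

```shell
# Is the bridge up, with the expected 172.17.x.x address?
ip addr show docker0

# Is IPv4 forwarding enabled? Should print: net.ipv4.ip_forward = 1
sysctl net.ipv4.ip_forward

# Did the daemon install its NAT rules? Look for a MASQUERADE line:
sudo iptables -t nat -S POSTROUTING

# How was the daemon actually started? Check whether --iptables=false
# is sneaking in from /etc/default/docker or a systemd unit:
ps aux | grep -w dockerd | grep -v grep
```

If the daemon command line already contains --iptables=false, editing /etc/default/docker may have no effect on systemd-based Ubuntu 16.04, because the systemd unit does not read that file by default.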

martinkouba commented Sep 17, 2016

Had the same trouble building nginx:alpine.

What fixed it for me was realizing that the build process uses docker0 and not the bridge I had configured.
The clue was in the shorewall log entries in /var/log/syslog.
Setting up network forwarding for docker0 fixed it.
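
For firewalls like shorewall that drop forwarded traffic by default, "setting up network forwarding for docker0" amounts to letting the bridge through the FORWARD chain. A raw-iptables sketch of the equivalent rules (your firewall's own config files are the better place to express this permanently):

```shell
# Allow traffic from containers out through the host ...
sudo iptables -I FORWARD -i docker0 ! -o docker0 -j ACCEPT

# ... and allow reply traffic back in to the containers:
sudo iptables -I FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
```

Dropped FORWARD packets logged in /var/log/syslog with docker0 as the in or out interface are the telltale sign that rules like these are missing.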

DavidObando commented Oct 18, 2016

Just in case it helps folks running Docker in an Ubuntu VM under Hyper-V (Windows) who face this same issue: it can be worked around by specifying the DNS server to use.

  • run nm-tool on the Ubuntu VM and pick any one of the DNS addresses in the output
  • sudo vi /etc/default/docker and add the DNS IP to the DOCKER_OPTS parameter
  • sudo service docker restart
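
For reference, the resulting line in /etc/default/docker might look like the following; 10.0.0.2 is a placeholder for whichever DNS address nm-tool reported, with a public resolver as fallback:

```shell
# /etc/default/docker  (hypothetical example)
# Replace 10.0.0.2 with a DNS address reported by nm-tool on your VM.
DOCKER_OPTS="--dns 10.0.0.2 --dns 8.8.8.8"
```

After restarting the daemon, containers started from then on should carry these servers in their /etc/resolv.conf.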

lukewaters commented Oct 21, 2016

@DavidObando

A clarification on your scenario: did you spin up an Ubuntu VM and then independently install Docker on that machine, or are you using the VM that is automatically deployed and configured when you install the Docker tools for Windows?

I'm doing the latter (automatic config) and I'm running into this network issue, but when I try to connect to the MobyLinuxVM through Hyper-V I'm unable to interact with the VM.

Update: reinstalling Docker for Windows fixed things...

DavidObando commented Oct 21, 2016

Hey @lukewaters! Glad to hear the issue went away for you.

I'm talking about an Ubuntu VM I set up myself (in Hyper-V) and within this Ubuntu VM I'm running both the docker daemon and the docker client (Linux).

Member

thaJeztah commented Oct 29, 2016

I'm going to close and lock this issue, because this issue has become a collection of a wide range of DNS and network related issues. Having all these issues collected in a long thread makes it very difficult to look into (some reports were resolved, others were reported a long time ago, and may no longer be relevant).

Also note that if you're having issues on Docker for Mac, please report through https://github.com/docker/for-mac/issues, as those issues can be specific to Docker for Mac (and not the "engine").

If you're still having this issue, please open a new issue with all the relevant information, so that it can be looked into in detail.

Thanks in advance for doing so, and apologies that you'll have to re-submit your issue if you're still affected.

@thaJeztah thaJeztah closed this Oct 29, 2016

@moby moby locked and limited conversation to collaborators Oct 29, 2016
