Exposed ports are only accessible with --net=host #13914

Closed
Jaykah opened this Issue Jun 12, 2015 · 51 comments


Jaykah commented Jun 12, 2015

I have encountered a strange issue that does not allow me to connect to an exposed container port:

docker run -d -p 2221:222 mycontainer:latest
docker0   Link encap:Ethernet
          inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0

eth0      Link encap:Ethernet 
          inet addr:10.0.1.40  Bcast:10.0.1.63  Mask:255.255.255.224

The port is there and listening

netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1507/sshd
tcp        0      0 10.0.1.40:16001         0.0.0.0:*               LISTEN      1136/python
tcp6       0      0 :::22                   :::*                    LISTEN      1507/sshd
tcp6       0      0 :::2221                 :::*                    LISTEN      13130/docker-proxy

Telnet is able to connect, but only to localhost

telnet localhost 2221
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2
telnet 10.0.1.40 2221
Trying 10.0.1.40...
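The same reachability check telnet is doing here can be scripted; a minimal sketch using only Python's standard library (the host/port values in the comments are just the ones from this report):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# In the scenario above: port_open("127.0.0.1", 2221) succeeds,
# while port_open("10.0.1.40", 2221) hangs and eventually returns False.
```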

Iptables:


iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootpc

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain DOCKER (1 references)
target     prot opt source               destination
...
ACCEPT     tcp  --  anywhere             172.17.0.2           tcp dpt:222
...

However, if I run docker with --net=host, I am successfully able to connect to the port.
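For background on what is actually listening on that port: the docker-proxy process in the netstat output above is a small userland relay that accepts connections on the published host port and forwards the bytes to the container. A rough sketch of that relay (illustrative only; the real proxy also handles UDP and many concurrent connections, and normally works alongside the iptables DNAT rules):

```python
import socket
import threading

def pipe(src, dst):
    # Copy bytes one way until EOF, then half-close the peer.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass

def start_proxy(listen_host, listen_port, target_host, target_port):
    """Accept one connection on listen_host:listen_port and relay it to
    target_host:target_port, the way docker-proxy relays a published host
    port to the container. Returns the actual (host, port) bound."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((listen_host, listen_port))
    srv.listen(1)

    def serve():
        client, _ = srv.accept()
        upstream = socket.create_connection((target_host, target_port))
        t = threading.Thread(target=pipe, args=(client, upstream))
        t.start()
        pipe(upstream, client)
        t.join()
        for s in (client, upstream, srv):
            s.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()

# e.g. start_proxy("0.0.0.0", 2221, "172.17.0.2", 222) would mirror the
# -p 2221:222 mapping above (addresses taken from this report).
```

Because a published port can be reached either through this proxy or through the iptables DNAT path, the two routes can behave differently, which is one reason localhost and external addresses may not fail the same way.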

Client version: 1.6.2
Client API version: 1.18
Go version (client): go1.4.2
Git commit (client): 7c8fca2
OS/Arch (client): linux/amd64
Server version: 1.6.2
Server API version: 1.18
Go version (server): go1.4.2
Git commit (server): 7c8fca2
OS/Arch (server): linux/amd64
Containers: 2
Images: 900
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 904
 Dirperm1 Supported: false
Execution Driver: native-0.2
Kernel Version: 3.13.0-54-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 4
Total Memory: 6.807 GiB
Name: Logstash
ID: MLLQ:HLPF:FZJV:FIDP:5PGM:UAZ6:SC72:P5BM:54AZ:ZCAF:MV5C:TZ72
WARNING: The Auth config file is empty
WARNING: No swap limit support

Linux Logstash 3.13.0-54-generic #91-Ubuntu SMP Tue May 26 19:15:08 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux hosted on Azure

GordonTheTurtle commented Jun 12, 2015

Hi!

Please read this important information about creating issues.

If you are reporting a new issue, make sure that we do not have any duplicates already open. You can ensure this by searching the issue list for this repository. If there is a duplicate, please close your issue and add a comment to the existing issue instead.

If you suspect your issue is a bug, please edit your issue description to include the BUG REPORT INFORMATION shown below. If you fail to provide this information within 7 days, we cannot debug your issue and will close it. We will, however, reopen it if you later provide the information.

This is an automated, informational response.

Thank you.

For more information about reporting issues, see https://github.com/docker/docker/blob/master/CONTRIBUTING.md#reporting-other-issues


BUG REPORT INFORMATION

Use the commands below to provide key information from your environment:

docker version:
docker info:
uname -a:

Provide additional environment details (AWS, VirtualBox, physical, etc.):

List the steps to reproduce the issue:
1.
2.
3.

Describe the results you received:

Describe the results you expected:

Provide additional info you think is important:

----------END REPORT ---------

#ENEEDMOREINFO



coolljt0725 commented Jun 13, 2015

Contributor

1. Do you set --ip on docker daemon start? It sets the default IP used when binding container ports.
2. Can you show the output of docker port YOUR_CONTAINER, or just the output of docker ps?


Jaykah commented Jun 16, 2015

  1. No, but adding it didn't help (tried both 0.0.0.0 and the local IP of the node)
  2. Output of docker ps:

CONTAINER ID        IMAGE                                     COMMAND                CREATED             STATUS              PORTS                                                                                                       NAMES
ed2924a6e2c6        integrity2:latest   "/bin/sh -c '/usr/bi   57 seconds ago      Up 53 seconds  0.0.0.0:2221->222/tcp   tender_franklin

stratosgear commented Aug 20, 2015

I'm also facing this problem, just today after rebooting.

Dockerfile:

FROM mongo:3.0.4
EXPOSE 28017 27017
CMD mongod --httpinterface --rest

Built as:

docker build -t pkh/mongodb-dev .

executing as:

docker run -ti --name mongodb-dev -p 28017:28017 -p 27017:27017 pkh/mongodb-dev

exposes ports:

[me:/home/me] $ sudo netstat -tulpn
[sudo] password for me: 
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:17500           0.0.0.0:*               LISTEN      5370/dropbox        
tcp        0      0 127.0.0.1:17600         0.0.0.0:*               LISTEN      5370/dropbox        
tcp        0      0 127.0.0.1:17603         0.0.0.0:*               LISTEN      5370/dropbox        
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      1338/cupsd          
tcp        0      0 127.0.0.1:5432          0.0.0.0:*               LISTEN      1981/postgres       
tcp6       0      0 :::27017                :::*                    LISTEN      18227/docker-proxy  
tcp6       0      0 :::28017                :::*                    LISTEN      18220/docker-proxy  
tcp6       0      0 :::22                   :::*                    LISTEN      1/init              
tcp6       0      0 ::1:631                 :::*                    LISTEN      1338/cupsd          
tcp6       0      0 ::1:5432                :::*                    LISTEN      1981/postgres       
udp        0      0 192.168.2.230:46792     0.0.0.0:*                           13642/dleyna-render 
udp        0      0 239.255.255.250:1900    0.0.0.0:*                           13642/dleyna-render 
udp        0      0 192.168.2.230:1900      0.0.0.0:*                           13642/dleyna-render 
udp        0      0 239.255.255.250:1900    0.0.0.0:*                           13642/dleyna-render 
udp        0      0 127.0.0.1:1900          0.0.0.0:*                           13642/dleyna-render 
udp        0      0 0.0.0.0:56160           0.0.0.0:*                           3130/dhclient       
udp        0      0 127.0.0.1:60711         0.0.0.0:*                           13642/dleyna-render 
udp        0      0 0.0.0.0:68              0.0.0.0:*                           3130/dhclient       
udp        0      0 192.168.2.230:123       0.0.0.0:*                           1963/ntpd           
udp        0      0 127.0.0.1:123           0.0.0.0:*                           1963/ntpd           
udp        0      0 0.0.0.0:123             0.0.0.0:*                           1963/ntpd           
udp        0      0 0.0.0.0:17500           0.0.0.0:*                           5370/dropbox        
udp        0      0 0.0.0.0:5353            0.0.0.0:*                           5643/google-chrome- 
udp6       0      0 :::64364                :::*                                3130/dhclient       
udp6       0      0 fe80::c838:37ff:fef:123 :::*                                1963/ntpd           
udp6       0      0 fe80::42:8aff:fed2::123 :::*                                1963/ntpd           
udp6       0      0 fe80::b6b6:76ff:feb:123 :::*                                1963/ntpd           
udp6       0      0 ::1:123                 :::*                                1963/ntpd           
udp6       0      0 :::123                  :::*                                1963/ntpd        

But I cannot connect to mongodb on localhost:27017

But when I:

docker run -ti --net=host --name mongodb-dev -p 28017:28017 -p 27017:27017 pkh/mongodb-dev

(note the added --net=host) it exposes the ports as:

[me:/home/me] $ sudo netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:17500           0.0.0.0:*               LISTEN      5370/dropbox        
tcp        0      0 127.0.0.1:17600         0.0.0.0:*               LISTEN      5370/dropbox        
tcp        0      0 127.0.0.1:17603         0.0.0.0:*               LISTEN      5370/dropbox        
tcp        0      0 0.0.0.0:27017           0.0.0.0:*               LISTEN      18654/mongod        
tcp        0      0 0.0.0.0:28017           0.0.0.0:*               LISTEN      18654/mongod        
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      1338/cupsd          
tcp        0      0 127.0.0.1:5432          0.0.0.0:*               LISTEN      1981/postgres       
tcp6       0      0 :::22                   :::*                    LISTEN      1/init              
tcp6       0      0 ::1:631                 :::*                    LISTEN      1338/cupsd          
tcp6       0      0 ::1:5432                :::*                    LISTEN      1981/postgres       
udp        0      0 192.168.2.230:46792     0.0.0.0:*                           13642/dleyna-render 
udp        0      0 239.255.255.250:1900    0.0.0.0:*                           13642/dleyna-render 
udp        0      0 192.168.2.230:1900      0.0.0.0:*                           13642/dleyna-render 
udp        0      0 239.255.255.250:1900    0.0.0.0:*                           13642/dleyna-render 
udp        0      0 127.0.0.1:1900          0.0.0.0:*                           13642/dleyna-render 
udp        0      0 0.0.0.0:56160           0.0.0.0:*                           3130/dhclient       
udp        0      0 127.0.0.1:60711         0.0.0.0:*                           13642/dleyna-render 
udp        0      0 0.0.0.0:68              0.0.0.0:*                           3130/dhclient       
udp        0      0 192.168.2.230:123       0.0.0.0:*                           1963/ntpd           
udp        0      0 127.0.0.1:123           0.0.0.0:*                           1963/ntpd           
udp        0      0 0.0.0.0:123             0.0.0.0:*                           1963/ntpd           
udp        0      0 0.0.0.0:17500           0.0.0.0:*                           5370/dropbox        
udp        0      0 0.0.0.0:5353            0.0.0.0:*                           5643/google-chrome- 
udp6       0      0 :::64364                :::*                                3130/dhclient       
udp6       0      0 fe80::b6b6:76ff:feb:123 :::*                                1963/ntpd           
udp6       0      0 ::1:123                 :::*                                1963/ntpd           
udp6       0      0 :::123                  :::*                                1963/ntpd           

(note that 27017 is now bound to mongodb and not docker-proxy) then it works as expected.

Up until yesterday it was working without it. Where do I start looking for what has changed? I'm sure I've run some system upgrades (I'm running Arch), but is it something in Docker that changed?

Bug Report Info

Docker Version:

[me:/home/me] $ docker version
Client:
 Version:      1.8.1
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   d12ea79
 Built:        Sat Aug 15 17:29:10 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.8.1
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   d12ea79
 Built:        Sat Aug 15 17:29:10 UTC 2015
 OS/Arch:      linux/amd64

Docker Info:

 [me:/home/me] 130 $ docker info
Containers: 11
Images: 202
Storage Driver: devicemapper
 Pool Name: docker-8:17-1442261-pool
 Pool Blocksize: 65.54 kB
 Backing Filesystem: extfs
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 7.842 GB
 Data Space Total: 107.4 GB
 Data Space Available: 34.97 GB
 Metadata Space Used: 11.59 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.136 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.102 (2015-07-07)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.1.5-1-ARCH
Operating System: ntergos
ANSI_COLOR=
CPUs: 8
Total Memory: 7.609 GiB
Name: firefly
ID: P653:YXK4:N32H:UPUN:UKVJ:LTCT:2XKU:T22Z:ZOQX:PLV3:3262:WQEQ

uname -a:

[me:/home/me] 130 $ uname -a
Linux firefly 4.1.5-1-ARCH #1 SMP PREEMPT Tue Aug 11 15:41:14 CEST 2015 x86_64 GNU/Linux


aboch commented Aug 20, 2015

Contributor

@Jaykah @stratosgear At least with docker 1.8.1, if you follow @coolljt0725's advice (restarting the daemon with --ip=<your host interface ip>), things should work.

In the netstat output you would see the exposed port bound to the IPv4 address you specified.

It seems to me the real issue is that when --ip is not specified, docker binds the port to the zero IPv6 address instead of the zero IPv4 address, or to both.


stratosgear commented Aug 20, 2015

To be a little more specific:

Adding --ip=127.0.0.1 to the ExecStart line of /etc/systemd/system/multi-user.target.wants/docker.service solved the problem (no more --net=host when I run my docker images).

BTW, I'm running Arch; I have no idea where changes to the Docker daemon should be made on other distros.
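For the record, on systemd-based distros the usual way to make such a change survive package upgrades is a drop-in override rather than editing the unit under multi-user.target.wants. A sketch (the drop-in path is the systemd convention; the ExecStart line should mirror whatever your installed unit actually uses):

```ini
# /etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// --ip=127.0.0.1
```

Then run systemctl daemon-reload followed by systemctl restart docker. Note that --ip=127.0.0.1 publishes ports on loopback only.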


aboch commented Aug 20, 2015

Contributor

To rectify my earlier comment: the netstat IPv6 binding notation does not seem to be an issue, as per the comments in docker/issues/2174.
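The reason the tcp6 `:::PORT` lines are harmless on their own: on Linux, a socket bound to the IPv6 wildcard with IPV6_V6ONLY disabled is dual-stack and accepts plain IPv4 connections as well. A small demonstration:

```python
import socket

# Bind one IPv6 socket to the wildcard address; netstat reports this
# as "tcp6  :::<port>", just like the docker-proxy lines above.
srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)  # dual-stack
srv.bind(("::", 0))              # port 0: let the kernel pick a free port
srv.listen(1)
port = srv.getsockname()[1]

# A plain IPv4 client can still connect to it...
client = socket.create_connection(("127.0.0.1", port))
conn, peer = srv.accept()

# ...and shows up as an IPv4-mapped IPv6 address (typically ::ffff:127.0.0.1).
print(peer[0])
for s in (conn, client, srv):
    s.close()
```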


raine commented Aug 21, 2015

I think this is the way you set the --ip flag when using docker-machine.

docker-machine create --driver virtualbox --engine-opt ip=127.0.0.1 dev

phobologic commented Aug 29, 2015

I'm running into this as well, after upgrading to Docker version 1.8.1, build d12ea79.

The strange thing is that it works fine on two of my hosts, but another one has this issue.

(My app runs on 8080)

On a good host:

ubuntu@ip-10-128-9-171:~$ docker --version
Docker version 1.8.1, build d12ea79
ubuntu@ip-10-128-9-171:~$ sudo netstat -tulpn | grep :8080
tcp6       0      0 :::8080                 :::*                    LISTEN      13891/docker-proxy
ubuntu@ip-10-128-9-171:~$ telnet 10.128.9.171 8080
Trying 10.128.9.171...
Connected to 10.128.9.171.
Escape character is '^]'.
GET /health HTTP/1.0

HTTP/1.0 200 OK
Date: Sat, 29 Aug 2015 20:14:01 GMT
Content-Length: 0
Content-Type: text/plain; charset=utf-8

Connection closed by foreign host.
ubuntu@ip-10-128-9-171:~$

On a bad host:

ubuntu@ip-10-128-16-9:~$ docker --version
Docker version 1.8.1, build d12ea79
ubuntu@ip-10-128-16-9:~$ sudo netstat -tulpn | grep :8080
tcp6       0      0 :::8080                 :::*                    LISTEN      21440/docker-proxy
ubuntu@ip-10-128-16-9:~$ telnet 10.128.16.9 8080
Trying 10.128.16.9...
telnet: Unable to connect to remote host: No route to host
ubuntu@ip-10-128-16-9:~$

phobologic commented Aug 29, 2015

I've tried to fix this a few ways: one using --ip=0.0.0.0 and another using --ip=127.0.0.1. The latter isn't an actual fix, since the ports are then only bound to localhost, which isn't what I want. Do we now need to provide --ip=<ip of eth0> to make this work every time? That doesn't seem like a good solution either, because it means the ports won't be bound on 127.0.0.1 as well. I'd really like to be able to bind to 0.0.0.0, as it worked before 1.8.


phobologic commented Aug 30, 2015

So I think I've found at least part of what is causing this, though not what is causing what is causing this. The issue appears to be old NAT rules not being removed in some cases. My current guess is that it happens when the docker daemon is restarted, though I haven't yet been able to verify that.
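One quick way to hunt for such leftovers (a hypothetical helper, not a docker tool: it parses `iptables -t nat -L DOCKER --line-numbers`-style output and flags DNAT rules whose destination IP is not among the live container IPs, using sample lines adapted from this report):

```python
import re

def stale_dnat_rules(iptables_output, live_ips):
    """Return (rule_number, target_ip) pairs for DNAT rules whose
    destination container IP is no longer live."""
    stale = []
    for line in iptables_output.splitlines():
        # e.g. "3    DNAT  tcp -- anywhere anywhere tcp dpt:http-alt to:172.17.0.9:8080"
        m = re.match(r"\s*(\d+)\s+DNAT\b.*to:([\d.]+):\d+", line)
        if m and m.group(2) not in live_ips:
            stale.append((int(m.group(1)), m.group(2)))
    return stale

sample = """\
Chain DOCKER (2 references)
num  target     prot opt source    destination
1    DNAT       udp  --  anywhere  anywhere   udp dpt:syslog to:172.17.0.6:514
2    DNAT       tcp  --  anywhere  anywhere   tcp dpt:http-alt to:172.17.0.9:8080
3    DNAT       tcp  --  anywhere  anywhere   tcp dpt:http-alt to:172.17.0.12:8080
"""
print(stale_dnat_rules(sample, {"172.17.0.6", "172.17.0.12"}))  # [(2, '172.17.0.9')]
```

Matched rule numbers could then be removed with iptables -t nat -D DOCKER <num>, deleting from the highest number first, since the remaining rules renumber after each deletion.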

Anyway, I downgraded to docker 1.7.1 (which was not super simple with Ubuntu packages; I had to go back to lxc-docker from the old Ubuntu repo) and I still saw this problem popping up. That's when I started digging into how docker moves traffic to the container when a port is exposed.

On a host where the container was running (and exporting to port 8080 on the host) where I couldn't connect, I took a look and saw this in the NAT table:

root@ip-10-128-17-123:/var/log/upstart# iptables -L -t nat --line-numbers
Chain PREROUTING (policy ACCEPT)
num  target     prot opt source               destination
1    DOCKER     all  --  anywhere             anywhere             ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination
1    DOCKER     all  --  anywhere            !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
num  target     prot opt source               destination
1    MASQUERADE  all  --  ip-172-17-0-0.ec2.internal/16  anywhere
2    MASQUERADE  udp  --  ip-172-17-0-6.ec2.internal  ip-172-17-0-6.ec2.internal  udp dpt:syslog
3    MASQUERADE  udp  --  ip-172-17-0-7.ec2.internal  ip-172-17-0-7.ec2.internal  udp dpt:syslog
4    MASQUERADE  tcp  --  ip-172-17-0-10.ec2.internal  ip-172-17-0-10.ec2.internal  tcp dpt:http-alt
5    MASQUERADE  tcp  --  ip-172-17-0-12.ec2.internal  ip-172-17-0-12.ec2.internal  tcp dpt:http-alt

Chain DOCKER (2 references)
num  target     prot opt source               destination
1    DNAT       udp  --  anywhere             anywhere             udp dpt:syslog to:172.17.0.6:514
2    DNAT       udp  --  anywhere             anywhere             udp dpt:11514 to:172.17.0.7:514
3    DNAT       tcp  --  anywhere             anywhere             tcp dpt:http-alt to:172.17.0.9:8080
4    DNAT       tcp  --  anywhere             anywhere             tcp dpt:http-alt to:172.17.0.10:8080
5    DNAT       tcp  --  anywhere             anywhere             tcp dpt:http-alt to:172.17.0.12:8080

ip-172-17-0-10 and ip-172-17-0-9 were both the IPs of a previous attempt at running the container that no longer exist (again, not sure why this is). I removed all rules referring to them, and I could then connect to my container from the host using any address on that host. Here's my NAT table after the fact when things were working once more:

root@ip-10-128-17-123:/var/log/upstart# iptables -t nat -D DOCKER 3
root@ip-10-128-17-123:/var/log/upstart# iptables -t nat -D DOCKER 3
root@ip-10-128-17-123:/var/log/upstart# iptables -L -t nat --line-numbers
Chain PREROUTING (policy ACCEPT)
num  target     prot opt source               destination
1    DOCKER     all  --  anywhere             anywhere             ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination
1    DOCKER     all  --  anywhere            !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
num  target     prot opt source               destination
1    MASQUERADE  all  --  ip-172-17-0-0.ec2.internal/16  anywhere
2    MASQUERADE  udp  --  ip-172-17-0-6.ec2.internal  ip-172-17-0-6.ec2.internal  udp dpt:syslog
3    MASQUERADE  udp  --  ip-172-17-0-7.ec2.internal  ip-172-17-0-7.ec2.internal  udp dpt:syslog
4    MASQUERADE  tcp  --  ip-172-17-0-10.ec2.internal  ip-172-17-0-10.ec2.internal  tcp dpt:http-alt
5    MASQUERADE  tcp  --  ip-172-17-0-12.ec2.internal  ip-172-17-0-12.ec2.internal  tcp dpt:http-alt

Chain DOCKER (2 references)
num  target     prot opt source               destination
1    DNAT       udp  --  anywhere             anywhere             udp dpt:syslog to:172.17.0.6:514
2    DNAT       udp  --  anywhere             anywhere             udp dpt:11514 to:172.17.0.7:514
3    DNAT       tcp  --  anywhere             anywhere             tcp dpt:http-alt to:172.17.0.12:8080
root@ip-10-128-17-123:/var/log/upstart# iptables -t nat -D POSTROUTING 4
root@ip-10-128-17-123:/var/log/upstart# telnet 10.128.17.123 8080
Trying 10.128.17.123...
Connected to 10.128.17.123.
Escape character is '^]'.
QUIT
HTTP/1.1 400 Bad Request

I took a look in /var/log/upstart/docker.log and saw these errors:

ERRO[0110] Error on iptables delete: iptables failed: iptables --wait -t nat -D DOCKER -p tcp -d 0/0 --dport 8080 -j DNAT --to-destination 172.17.0.9:8080 ! -i docker0: iptables: Bad rule (does a matching rule exist in that chain?).
ERRO[0114] Error on iptables delete: iptables failed: iptables --wait -t nat -D DOCKER -p tcp -d 0/0 --dport 8080 -j DNAT --to-destination 172.17.0.10:8080 ! -i docker0: iptables: Bad rule (does a matching rule exist in that chain?).

Here's some more context around the first of those failures:

INFO[0110] POST /v1.19/containers/7acbfc25c5d18e492112b787754b60d674d3e6d341b7e8b4ec83fae54ea088cc/attach?stderr=1&stdout=1&stream=1
INFO[0110] POST /v1.19/containers/7acbfc25c5d18e492112b787754b60d674d3e6d341b7e8b4ec83fae54ea088cc/start
ERRO[0110] leaving endpoint failed: a container has already joined the endpoint
ERRO[0110] Error on iptables delete: iptables failed: iptables --wait -t nat -D DOCKER -p tcp -d 0/0 --dport 8080 -j DNAT --to-destination 172.17.0.9:8080 ! -i docker0: iptables: Bad rule (does a matching rule exist in that chain?).
 (exit status 1)
ERRO[0110] Handler for POST /containers/{name:.*}/start returned error: Cannot start container 7acbfc25c5d18e492112b787754b60d674d3e6d341b7e8b4ec83fae54ea088cc: Link not found
ERRO[0110] HTTP Error                                    err=Cannot start container 7acbfc25c5d18e492112b787754b60d674d3e6d341b7e8b4ec83fae54ea088cc: Link not found statusCode=404

Anyway, I'm going to keep trying to dig into this as this is a huge problem for us and I don't yet have a solution. Here's some more info about the environment:

# docker version
Client version: 1.7.1
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 786b29d
OS/Arch (client): linux/amd64
Server version: 1.7.1
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 786b29d
OS/Arch (server): linux/amd64

# docker info
Containers: 5
Images: 63
Storage Driver: devicemapper
 Pool Name: docker-202:112-3145729-pool
 Pool Blocksize: 65.54 kB
 Backing Filesystem: extfs
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 1.886 GB
 Data Space Total: 107.4 GB
 Data Space Available: 103.6 GB
 Metadata Space Used: 3.547 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.144 GB
 Udev Sync Supported: false
 Deferred Removal Enabled: false
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.82-git (2013-10-04)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-62-generic
Operating System: Ubuntu 14.04.3 LTS
CPUs: 4
Total Memory: 7.305 GiB
Name: ip-10-128-17-123
ID: 2RGH:PE2B:KOB2:QY4R:2ODG:RNFT:YCMJ:6OIS:GKWQ:LTUV:MQZF:HXZD
WARNING: No swap limit support

# uname -a
Linux ip-10-128-17-123 3.13.0-62-generic #102-Ubuntu SMP Tue Aug 11 14:29:36 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

stratosgear commented Aug 31, 2015

Just wanted to say bravo for such deep investigative work and acknowledge the effort you put into that...!


phobologic commented Aug 31, 2015

So I just realized that we have been running docker 1.7.0 in staging/production. I went ahead and rolled back to that in my test environment and have been repeatedly terminating instances (and letting AWS Autoscaling bring up new ones in their place) and I haven't seen this issue since. I've got about 6 rebuilds under my belt now with this, so it seems like this may have been introduced in 1.7.1.

Editing to add: I've since done over 40 host rebuilds, and not a single instance of this issue since moving to 1.7.0. Before this we'd see it on about every 3rd-4th instance launch attempt.


phobologic commented Sep 2, 2015

Just rolled forward again to 1.7.1, this time using the old apt mirror (since the new one doesn't yet allow you to pull down old versions) and this issue started popping up almost immediately once more.

$ uname -a
Linux ip-10-128-11-153 3.13.0-62-generic #102-Ubuntu SMP Tue Aug 11 14:29:36 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

$ docker version
Client version: 1.7.1
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 786b29d
OS/Arch (client): linux/amd64
Server version: 1.7.1
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 786b29d
OS/Arch (server): linux/amd64

$ docker info
Containers: 5
Images: 63
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 73
 Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-62-generic
Operating System: Ubuntu 14.04.3 LTS
CPUs: 4
Total Memory: 7.305 GiB
Name: ip-10-128-11-153
ID: LMSQ:GE3Z:ORXW:RM6E:OPTH:KEOP:2APO:WQQ3:EZT2:KW6B:GMZ4:G2ZM
WARNING: No swap limit support

$ sudo iptables -L -t nat
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  anywhere             anywhere             ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  anywhere            !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  ip-172-17-0-0.ec2.internal/16  anywhere
MASQUERADE  udp  --  ip-172-17-0-6.ec2.internal  ip-172-17-0-6.ec2.internal  udp dpt:syslog
MASQUERADE  tcp  --  ip-172-17-0-7.ec2.internal  ip-172-17-0-7.ec2.internal  tcp dpt:http-alt
MASQUERADE  udp  --  ip-172-17-0-8.ec2.internal  ip-172-17-0-8.ec2.internal  udp dpt:syslog
MASQUERADE  tcp  --  ip-172-17-0-9.ec2.internal  ip-172-17-0-9.ec2.internal  tcp dpt:http-alt
MASQUERADE  tcp  --  ip-172-17-0-12.ec2.internal  ip-172-17-0-12.ec2.internal  tcp dpt:http-alt

Chain DOCKER (2 references)
target     prot opt source               destination
DNAT       tcp  --  anywhere             anywhere             tcp dpt:http-alt to:172.17.0.5:8080
DNAT       udp  --  anywhere             anywhere             udp dpt:11514 to:172.17.0.6:514
DNAT       tcp  --  anywhere             anywhere             tcp dpt:http-alt to:172.17.0.7:8080
DNAT       udp  --  anywhere             anywhere             udp dpt:syslog to:172.17.0.8:514
DNAT       tcp  --  anywhere             anywhere             tcp dpt:http-alt to:172.17.0.9:8080
DNAT       tcp  --  anywhere             anywhere             tcp dpt:http-alt to:172.17.0.12:8080

Teudimundo commented Sep 29, 2015

Just to say that the same issue was present already on 1.6.2. We solved the issue by deleting the stale iptables rules.

Last time it happened after we had an error such as the one explained in #8539. But since this happened other times simply by stopping, removing and recreating the container (through compose), I don't really know whether the two issues are really related.


mavenugo (Contributor) commented Sep 29, 2015

ping @aboch. can you PTAL at this ?


stratosgear commented Sep 30, 2015

@Teudimundo How do you actually go about deleting the stale iptables rules? I'm still having issues with this, well, issue... Thanks!


Teudimundo commented Sep 30, 2015

@stratosgear I followed the instructions from @phobologic. Look for the port you are trying to expose and you will find rules forwarding that port to the IP of a container that is no longer running. You need to delete that rule (or those rules). Using iptables -L -t nat --line-numbers you find the rules, then you delete them by the corresponding number. Be careful: every time you delete a rule, the line numbers of the following rules change, so it's best to list the rules again after each removal to see the line number of the next rule you want to delete.


glasser (Contributor) commented Oct 2, 2015

It seems like you should be able to work around this with a cronjob that checks the DOCKER chain in iptables for duplicates and if found uses the Docker API to figure out what the current IP for the port is and remove the bad ones.

Right now we're seeing tons of orphaned rules in our iptables chains; only for the services with a fixed port does it actually cause an observable problem, but the issue seems to happen all the time.
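The cronjob idea above could be sketched roughly like this — a hypothetical shell helper that flags DNAT rules in the DOCKER chain whose destination IP no longer belongs to a running container. The `stale_rules` function and the sample rule data are illustrative, not from this thread:

```shell
#!/bin/sh
# Hypothetical sketch: flag DNAT rules in the DOCKER chain whose
# destination IP no longer belongs to a running container.
# stale_rules RULES LIVE_IPS  -> prints the rules pointing at dead IPs.
stale_rules() {
  rules="$1"; live="$2"
  printf '%s\n' "$rules" | grep -e '-A DOCKER .*--to-destination' |
  while read -r rule; do
    ip=$(printf '%s\n' "$rule" | sed -n 's/.*--to-destination \([0-9.]*\):.*/\1/p')
    case " $live " in
      *" $ip "*) ;;                  # IP belongs to a live container
      *) printf '%s\n' "$rule" ;;    # stale rule: the container is gone
    esac
  done
}

# In production the inputs would come from something like:
#   rules=$(iptables-save -t nat)
#   live=$(docker ps -q | xargs docker inspect -f '{{ .NetworkSettings.IPAddress }}')
rules='-A DOCKER -p tcp --dport 8080 -j DNAT --to-destination 172.17.0.9:8080
-A DOCKER -p tcp --dport 8080 -j DNAT --to-destination 172.17.0.12:8080'
stale_rules "$rules" "172.17.0.12"
# Each reported rule could then be deleted by swapping "-A" for "-D"
# and feeding it back through iptables (as root).
```

With the sample data, only the rule pointing at 172.17.0.9 is reported, since 172.17.0.12 is in the live-IP list.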


mindscratch commented Oct 7, 2015

Seeing similar problems here as well with Docker 1.7.1 on CentOS 7 (iptables version 1.4.21).


glasser (Contributor) commented Oct 9, 2015

If docker changed to add rules to the DOCKER chain with -I (insert at front) instead of -A (append), it would work around this bug: even though the old bad rules would still be there, at least the new rules would take precedence.

I've written a little tool that I'm going to try to run in a cronjob every minute in production: https://gist.github.com/glasser/0486d98073ce15f38b9d
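As a rough illustration of that -I workaround (not actual Docker code), an existing append-style rule can be rewritten into an insert-at-front rule, so a fresh mapping shadows any stale duplicate left behind:

```shell
# Illustrative only: turn an "-A DOCKER ..." (append) rule into an
# "-I DOCKER 1 ..." (insert at position 1) rule so it takes precedence
# over any stale duplicates further down the chain.
rule='-A DOCKER -p tcp --dport 8080 -j DNAT --to-destination 172.17.0.12:8080'
printf '%s\n' "$rule" | sed 's/^-A DOCKER/-I DOCKER 1/'
# The rewritten rule would then be applied via iptables -t nat (as root).
```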


phobologic added a commit to remind101/empire_ami that referenced this issue Oct 11, 2015

Lock to docker 1.7.0, fix ecs tcs mounts
After docker 1.7.0 we (Remind) have seen this issue pop up:

moby/moby#13914

Now that this is closed:

moby/moby#16001

We can use the new docker-engine packages, but with an old version. I'm
moving the empire_ami back to docker 1.7.0 till 13914 above is fixed.

As well, this should fix ECS stats - we just needed a bunch of volumes,
per:

aws/amazon-ecs-agent#174

Doing this internally @ Remind fixed this.
chintansheth commented Nov 20, 2015

Yes, I have the same problem with 1.9.0.


deadlyicon commented Dec 19, 2015

I'm having the same problem:

☃ docker -v
Docker version 1.9.1, build a34a1d5

☃ docker info
Containers: 2
Images: 39
Server Version: 1.9.1
Storage Driver: aufs
Root Dir: /mnt/sda1/var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 43
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.1.13-boot2docker
Operating System: Boot2Docker 1.9.1 (TCL 6.4.1); master : cef800b - Fri Nov 20 19:33:59 UTC 2015
CPUs: 1
Total Memory: 996.2 MiB
Name: dev2
ID: 4WBP:AYEU:CMMD:OJ2W:V3WC:TLW4:4JM6:XWZC:PFEE:CKDH:GPQK:6JWZ
Debug mode (server): true
File Descriptors: 33
Goroutines: 51
System Time: 2015-12-19T00:44:39.753531311Z
EventsListeners: 0
Init SHA1:
Init Path: /usr/local/bin/docker
Docker Root Dir: /mnt/sda1/var/lib/docker
Labels:
provider=virtualbox

☃ uname -a
Darwin x.local 15.2.0 Darwin Kernel Version 15.2.0: Fri Nov 13 19:56:56 PST 2015; root:xnu-3248.20.55~2/RELEASE_X86_64 x86_64


kusmierz referenced this issue Mar 15, 2016: ipv6 #81 (open)

x110dc commented Mar 15, 2016

In case it's helpful to anyone: I just ran into this problem with 1.10.3 and restarting the docker daemon corrected it.

Client:
 Version:      1.10.3
 API version:  1.22
 Go version:   go1.5.3
 Git commit:   20f81dd
 Built:        Thu Mar 10 15:54:52 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.10.3
 API version:  1.22
 Go version:   go1.5.3
 Git commit:   20f81dd
 Built:        Thu Mar 10 15:54:52 2016
 OS/Arch:      linux/amd64

iargent commented Mar 27, 2016

I just had the exact same problem with 1.10.3, and it means that the final part of the introductory tutorial was broken for me. I was able to get it working by adding --ip=0.0.0.0 in the ExecStart line of /etc/systemd/system/multi-user.target.wants/docker.service
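For what it's worth, a systemd drop-in is a less invasive way to make that kind of change, since edits to the shipped unit file can be overwritten on upgrade. A sketch — the path and the exact ExecStart line are assumptions, not from this thread, and must match your installed Docker version:

```ini
# /etc/systemd/system/docker.service.d/ip.conf  (hypothetical drop-in)
[Service]
# The empty ExecStart= clears the original definition before replacing it.
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// --ip=0.0.0.0
```

followed by `systemctl daemon-reload && systemctl restart docker`.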


jjsnyder commented Apr 1, 2016

I am having the same issue using 1.10.3 on Ubuntu 14.04. The only way I can access exposed ports is when I start the container using --net=host.

I have tried adding --ip=0.0.0.0 to DOCKER_OPTS in /etc/default/docker and restarting the daemon, but that does not work. Anyone else able to make it work on Ubuntu?


jjsnyder commented Apr 4, 2016

I modified /etc/sysctl.conf and added/uncommented

net.ipv6.conf.all.forwarding=1

After rebooting, the ports are accessible with the bridge network. However, if I use a VPN then I am back to being unable to access the ports. Any suggestions?

jjsnyder commented Apr 4, 2016

I modified /etc/sysctl.conf and added/uncommented

net.ipv6.conf.all.forwarding=1

After rebooting the ports are accessible with the bridge network. However if I use VPN then I am back to being unable to access the ports. Any suggestions?

@jjsnyder

jjsnyder commented Apr 4, 2016

I switched to OpenConnect from Cisco anyConnect and it all works now.

@smougenot

smougenot commented May 1, 2016

If you want your container ports to bind on your IPv4 address, just:

  • edit /etc/sysconfig/docker-network
    • add DOCKER_NETWORK_OPTIONS=-ip=xx.xx.xx.xx
    • xx.xx.xx.xx being your real ipv4 (and not 0.0.0.0)
  • restart the docker daemon

works for me on docker 1.9.1

@joshuacox

joshuacox commented May 11, 2016

Any clue where this might be in debian? I tried adding it to /etc/default/docker

DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4"

DOCKER_NETWORK_OPTIONS="--ip=x.x.x.x"

and also as a part of DOCKER_OPTS:

DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 --ip=x.x.x.x"

I've recently upgraded to 1.11.1:

docker version
Client:
 Version:      1.11.1
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   5604cbe
 Built:        Tue Apr 26 23:11:07 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.11.1
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   5604cbe
 Built:        Tue Apr 26 23:11:07 2016
 OS/Arch:      linux/amd64

I still get containers that listen on IPv6 only; adding --net=host doesn't seem to help either.

@thaJeztah

Member

thaJeztah commented May 11, 2016

@joshuacox what version of debian are you running? If it's a version that uses systemd, then /etc/default/docker is not used

@JivanAmara

JivanAmara commented Jun 3, 2016

I'm seeing the same issue on Ubuntu 14.04. The --ip flag doesn't change anything; only the --net=host option allows connections to the mapped ports.

$ docker -v
Docker version 1.11.1, build 5604cbe
$ docker info
Containers: 2
Running: 0
Paused: 0
Stopped: 2
Images: 99
Server Version: 1.11.1
Storage Driver: aufs
Root Dir: /mnt/SecondHD/docker/aufs
Backing Filesystem: extfs
Dirs: 145
Dirperm1 Supported: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge null host
Kernel Version: 3.13.0-87-generic
Operating System: Ubuntu 14.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.37 GiB
Name: Nightfall
ID: B4IS:XYPJ:RKMT:CBAR:CPXY:OEE7:BD4J:KS53:Z625:XXJQ:HSTS:2BU4
Docker Root Dir: /mnt/SecondHD/docker
Debug mode (client): false
Debug mode (server): false
Username: jivanamara
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
(nbtop5)jivan@Nightfall:~/.projects/nbtop5$

@marcusnielsen

marcusnielsen commented Jul 6, 2016

Docker version 1.11.2, build b9f10c9
Ubuntu 14.04
It doesn't work without --net=host option.

@hmeerlo

hmeerlo commented Jul 15, 2016

Ok, same problem here on Ubuntu 14.04. No leftover iptables rules. Works fine on another machine with docker 0.10.3.
I must note that I have 2 network interfaces and I explicitly map the port to one of the interfaces (eth1). I can telnet locally to the bound port on eth1 (443), but when external traffic comes in on eth1 port 443 it goes down the drain.

ubuntu@ip-172-31-0-235:~$ docker -v
Docker version 1.11.2, build b9f10c9
ubuntu@ip-172-31-0-235:~$ docker info
Containers: 5
Running: 5
Paused: 0
Stopped: 0
Images: 5
Server Version: 1.11.2
Storage Driver: devicemapper
Pool Name: docker-202:1-149173-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: ext4
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 3.063 GB
Data Space Total: 107.4 GB
Data Space Available: 3.914 GB
Metadata Space Used: 4.354 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.143 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
WARNING: Usage of loopback devices is strongly discouraged for production use. Either use --storage-opt dm.thinpooldev or use --storage-opt dm.no_warn_on_loop_devices=true to suppress this warning.
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.77 (2012-10-15)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: host bridge null
Kernel Version: 3.13.0-74-generic
Operating System: Ubuntu 14.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.954 GiB
Name: ip-172-31-0-235
ID: POID:OYMX:63CU:HF5U:WBJO:MMTN:3ST3:BJUZ:DDNR:S4WI:6MNO:BPGL
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
ubuntu@ip-172-31-0-235:~$

@GaalDornick

GaalDornick commented Jul 21, 2016

I am having the same problem. I am trying to dockerize HDFS. HDFS has 2 components: namenode and datanode. NameNode is roughly the "master"; DataNode is the "worker". The NameNode is responsible for managing one or more datanodes.
The NameNode exposes multiple ports. The important ones are 50070 and 8020. 50070 is the web admin console for HDFS. 8020 is the port that the DataNode communicates over. I stood up a NameNode and exposed both 50070 and 8020. I can access 50070 from the host machine, but cannot access 8020. When I telnet to 8020 on the NameNode, it connects but immediately closes the connection.

I stood up a DataNode and linked the namenode to the datanode. The datanode isn't able to register itself with the namenode. When I telnet into 8020 on the NameNode from the DataNode, I see the same problem. The connection drops as soon as it connects.
namenode.zip

@cpuguy83

Contributor

cpuguy83 commented Jul 22, 2016

@GaalDornick When you are on the host and you telnet to the exposed port, it goes through a proxy process, which is probably why telnet is connecting.

To me it sounds like nothing is listening on 8020, or perhaps more correct, nothing is listening on 8020 on eth0 in the container where the forwarded port should be going to.

@GaalDornick

GaalDornick commented Jul 22, 2016

@cpuguy83 When I create a bash shell in the container and telnet to localhost 8020, it connects. It's a standard hadoop name node. It listens on both ports 50070 and 8020. It's not logging any errors. I've never seen a case where Namenode listens on one port but not the other. If NameNode is not able to listen on 8020, I would expect it to log errors or stop

@cpuguy83

cpuguy83 commented Jul 22, 2016

@GaalDornick That's my point. localhost inside the container is not the same as localhost outside the container (or localhost between containers). Ports can't be forwarded to localhost in the container. If the process is only listening on localhost then nothing can be forwarded to it.
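cpuguy83's point can be demonstrated without Docker at all: a socket bound specifically to 127.0.0.1 rejects connections arriving on any other local address, which is exactly what a forwarded port looks like to the process. A small sketch (plain Python, Linux assumed — 127.0.0.2 is just another address on the loopback interface there):

```python
import socket

def reachable(addr, port):
    """Try a TCP connect; True if something accepted it."""
    try:
        with socket.create_connection((addr, port), timeout=1):
            return True
    except OSError:
        return False

# A server bound to 127.0.0.1 only -- analogous to a process inside a
# container that listens on localhost instead of 0.0.0.0 / eth0.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # port 0 = let the kernel pick a free port
srv.listen()
port = srv.getsockname()[1]

print(reachable("127.0.0.1", port))  # True: the address it bound to
print(reachable("127.0.0.2", port))  # False: any other local address
srv.close()
```

Binding to ("0.0.0.0", port) instead makes both connects succeed, which is why changing the application's listen address (rather than any Docker flag) fixes the forwarding.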

@GaalDornick

GaalDornick commented Jul 22, 2016

Ahh, I see. How do I check where Hadoop is listening on port 8020? Normally I run netstat -tulpn to find which ports are bound to which processes, but I can't install netstat inside the container.

I do see something in the Hadoop logs. When it listens to 50070, it logs this

/proc/130/task/130/fd/286:2016-07-22 15:24:53,551 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://**0.0.0.0**:50070
/proc/130/task/130/fd/286:2016-07-22 15:24:53,760 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
/proc/130/task/130/fd/286:2016-07-22 15:24:54,236 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070

When it listens on 8020, it logs this

/proc/130/task/130/fd/286:2016-07-22 15:24:52,989 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://**127.0.0.1**:8020
/proc/130/task/130/fd/286:2016-07-22 15:24:52,990 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use 127.0.0.1:8020 to access this namenode/service.
/proc/130/task/130/fd/286:2016-07-22 15:24:55,570 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to localhost:8020
/proc/130/task/130/fd/286:2016-07-22 15:24:55,608 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8020
/proc/130/task/130/fd/286:2016-07-22 15:24:55,804 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8020: starting
/proc/130/task/130/fd/286:2016-07-22 15:24:55,811 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: localhost/127.0.0.1:8020

The one that works binds to 0.0.0.0, and the one that doesn't binds to 127.0.0.1. Could this cause the problem?

@cpuguy83

Contributor

cpuguy83 commented Jul 22, 2016

@GaalDornick Most likely that's it.
You can check netstat by doing docker run --net container:<name> and start an image with netstat (or something you can install netstat into).
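When no netstat-capable image is handy, the same information is available in the kernel's /proc/net/tcp, readable from inside any Linux container with nothing installed. A rough substitute (Python; IPv4 only — the hex layout assumed here is the standard little-endian /proc format, and /proc/net/tcp6 would need separate handling):

```python
import socket
import struct

def listening_sockets(path="/proc/net/tcp"):
    """Return (ip, port) pairs for IPv4 sockets in LISTEN state."""
    out = []
    with open(path) as f:
        next(f)                      # skip the header row
        for line in f:
            fields = line.split()
            local, state = fields[1], fields[3]
            if state != "0A":        # 0x0A == TCP_LISTEN
                continue
            hexip, hexport = local.split(":")
            ip = socket.inet_ntoa(struct.pack("<I", int(hexip, 16)))
            out.append((ip, int(hexport, 16)))
    return out

# A service bound to localhost shows up here as ('127.0.0.1', port) --
# the tell-tale sign that a forwarded port can never reach the process.
print(listening_sockets())
```

Running this inside the container (e.g. via `docker exec`) shows each listener's bind address directly, which is the piece of information this thread keeps turning on.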

@GaalDornick

GaalDornick commented Jul 22, 2016

YES!! That worked. Thanks @cpuguy83

Just in case someone else is facing this problem: configure your application to listen on 0.0.0.0, not 127.0.0.1. I bet Docker's documentation probably spells this out somewhere and I missed it.

@valtoni

valtoni commented Oct 30, 2016

Same trouble here. I'm running docker version 1.12.3, build 6b644ec.
Trying to run couchbase:
docker run -d --name db -p 8091 -p 11210:11210 couchbase
Without the --net=host option I cannot access it from the docker host; it only works with:
docker run --net=host -d --name db -p 8091 -p 11210:11210 couchbase

I do not have any special configuration and the nat table is empty.

@thaJeztah

thaJeztah commented Oct 30, 2016

@valtoni please open a new issue with at least the output of docker version and docker info, and the exact steps to reproduce (including how you're trying to connect). I'm not sure there's a bug though, more likely a configuration issue.

@warmchang

warmchang commented Jan 16, 2017

+1. I had the same problem.

@akamalov

akamalov commented Jan 19, 2017

Greetings,

Running the following environment:

[root@mslave1 executors]# docker version
Client:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.4
 Git commit:   78d1802
 Built:        Tue Jan 10 20:20:01 2017
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.4
 Git commit:   78d1802
 Built:        Tue Jan 10 20:20:01 2017
 OS/Arch:      linux/amd64
[root@mslave1 executors]# 

Trying to deploy ETCD cluster in HOST mode:

[root@mslave1 executors]# docker run -d --net=host --userns=host akamalov/etcd-docker:3.1.7008
4762fd7c620edb65406056fcdaec4d3ff17351bc902db9012f13178c1825a54c
[root@mslave1 executors]#

Problem:

The deployed container only binds to interface 127.0.0.1. I tried setting /etc/sysconfig/docker-network to DOCKER_NETWORK_OPTIONS=-ip=192.168.120.161, but it didn't help (I also tried setting it to 0.0.0.0, without any result).

Below is the ETCD ports (2379, 2380) only binding to localhost interface 127.0.0.1:

[root@mslave1 executors]# ss -lnt
State       Recv-Q Send-Q                                             Local Address:Port                                                            Peer Address:Port              
LISTEN      0      64                                                             *:35113                                                                      *:*                  
LISTEN      0      128                                                192.168.120.161:7946                                                                       *:*                  
LISTEN      0      128                                                    127.0.0.1:2379                                                                       *:*                  
LISTEN      0      128                                                    127.0.0.1:7979                                                                       *:*                  
LISTEN      0      128                                                    127.0.0.1:2380                                                                       *:*                  
LISTEN      0      128                                                            *:4750                                                                       *:*                  
LISTEN      0      128                                                            *:111                                                                        *:*       

Any ideas?

@thaJeztah

Member

thaJeztah commented Jan 19, 2017

@akamalov the default configuration of etcd is to listen on localhost only.

docker run --rm akamalov/etcd-docker:3.1.7008 --help

	--listen-peer-urls 'http://localhost:2380'
		list of URLs to listen on for peer traffic.
	--listen-client-urls 'http://localhost:2379'
		list of URLs to listen on for client traffic.

...

	--initial-advertise-peer-urls 'http://localhost:2380'
		list of this member's peer URLs to advertise to the rest of the cluster.
	--initial-cluster 'default=http://localhost:2380'
		initial cluster configuration for bootstrapping.
...
	--advertise-client-urls 'http://localhost:2379'

Just as a quick check, try;

docker run -d -p 2379:2379 --net=host --name etcd akamalov/etcd-docker:3.1.7008 --advertise-client-urls 'http://0.0.0.0:2379' --listen-client-urls 'http://0.0.0.0:2379'

And it should listen on any IP-address

@thaJeztah

Member

thaJeztah commented Jan 20, 2017

This issue has become a kitchen sink of many issues related to networking. Some issues reported here are due to misconfiguration of the container, some because of settings on the host or daemon configuration.

I'm going to lock this issue to prevent the discussion diverging further. If you are running into issues, and suspect there's a bug, please open a new issue with details, and the exact steps to reproduce.

@thaJeztah thaJeztah closed this Jan 20, 2017

@moby moby locked and limited conversation to collaborators Jan 20, 2017
