Exposed ports are only accessible with --net=host #13914
Hi! Please read this important information about creating issues. If you are reporting a new issue, make sure that we do not have any duplicates already open. You can ensure this by searching the issue list for this repository. If there is a duplicate, please close your issue and add a comment to the existing issue instead. If you suspect your issue is a bug, please edit your issue description to include the BUG REPORT INFORMATION shown below. If you fail to provide this information within 7 days, we cannot debug your issue and will close it. We will, however, reopen it if you later provide the information. This is an automated, informational response. Thank you. For more information about reporting issues, see https://github.com/docker/docker/blob/master/CONTRIBUTING.md#reporting-other-issues

BUG REPORT INFORMATION

Use the commands below to provide key information from your environment:
Provide additional environment details (AWS, VirtualBox, physical, etc.):
List the steps to reproduce the issue:
Describe the results you received:
Describe the results you expected:
Provide additional info you think is important:
----------END REPORT ---------
#ENEEDMOREINFO
Do you set
I'm also facing this problem, just today after rebooting. Dockerfile:
Built as:
executing as:
exposes ports:
But I cannot connect to mongodb on port 27017. But when I:
(note the added
(note that 27017 is now bound to mongodb and not docker-proxy), then it works as expected. Up until yesterday it was working without it. Where do I start looking for what has changed? I'm sure I've run some system upgrades (running on Arch), but is it something in Docker that changed?

Bug Report Info

Docker Version:
Docker Info:
uname -a:
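For reference, a minimal sketch of collecting the environment details requested above, using standard commands:

```bash
docker version   # client and daemon versions
docker info      # storage driver, kernel version, networking details
uname -a         # host kernel and architecture
```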
@Jaykah @stratosgear At least with docker 1.8.1, if you follow @coolljt0725's advice (restarting the daemon with `--ip` set), in the netstat output you will see the exposed port bound to the IPv4 address you specified. It seems to me the real issue is that if
Being a little more specific: the flag has to be added to the Docker daemon's startup options. BTW, I'm running Arch; no idea where changes to the Docker daemon options should take place in other distros.
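For anyone else landing here, a minimal sketch of that workaround, assuming the flag in question is the daemon's `--ip` option (where the option goes depends on your distro and init system):

```bash
# Start the daemon with a default bind address for published ports
# (on releases of this era the daemon is started as `docker daemon ...`;
#  on 1.12+ it is `dockerd ...`)
docker daemon --ip=0.0.0.0

# Then verify that a published port is bound to IPv4, not only IPv6
sudo netstat -tlnp | grep docker-proxy
```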
I need to rectify my comment: the netstat IPv6 binding notation does not seem to be an issue, as per the comments in docker/issues/2174.
I think this is the way you set the
I'm running into this as well, after upgrading. The strange thing is that on two of my hosts it works fine, but another one is having this issue. (My app runs on 8080.) On a good host:
On a bad host:
I've tried to fix this a few ways; one is using
So I think I've found at least part of what is causing this, though not the root cause behind it. It looks like the issue has to do with old NAT rules not being removed in some cases. My current guess is that it happens when the docker daemon is restarted, though I haven't yet been able to verify that. Anyway, I downgraded to docker 1.7.1 (which was not super simple with Ubuntu packages; I had to go back to lxc-docker from the old Ubuntu repo) and I still saw this problem popping up. That's when I started digging into how docker moves traffic to the container when a port is exposed. On a host where the container was running (and exposing port 8080 on the host) but I couldn't connect, I took a look and saw this in the NAT table:
I took a look in
Here's some more context around the first of those failures:
Anyway, I'm going to keep trying to dig into this as this is a huge problem for us and I don't yet have a solution. Here's some more info about the environment:
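For anyone wanting to run the same check, a sketch of the commands involved (nothing here is specific to this host):

```bash
# Show the DOCKER chain of the NAT table with rule numbers; stale DNAT
# rules point at container IPs that no longer belong to a running container
sudo iptables -t nat -L DOCKER -n -v --line-numbers

# Compare against the IPs of the containers that are actually running
docker inspect -f '{{ .Name }} {{ .NetworkSettings.IPAddress }}' $(docker ps -q)
```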
Just wanted to say bravo for such deep investigative work and acknowledge the effort you put into that...!
So I just realized that we have been running docker
Editing to add: I've since done over 40 host rebuilds, and not a single instance of this issue since moving to
Just rolled forward again to
Just to say that the same issue was already present on 1.6.2. We solved it by deleting the stale iptables rules. The last time it happened was after an error such as the one explained in #8539. But since this has also happened other times simply by stopping, removing, and recreating the container (through compose), I don't really know whether the two issues are actually related.
ping @aboch. Can you PTAL at this?
@Teudimundo How do you actually go about deleting the stale iptables rules? I'm still having issues with this, well, issue... Thanks!
@stratosgear I followed the instructions of @phobologic. You look for the port you are trying to expose, and you will find rules forwarding that port to the IP of a container that is no longer running. You need to delete that rule (or those rules). Using
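A sketch of the deletion itself (the rule number below is only an example; take it from the `--line-numbers` listing first):

```bash
# List DNAT rules in the DOCKER chain with their numbers
sudo iptables -t nat -L DOCKER -n --line-numbers

# Delete the stale rule by number (3 is only an example)
sudo iptables -t nat -D DOCKER 3
```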
It seems like you should be able to work around this with a cronjob that checks the DOCKER chain in iptables for duplicates and, if any are found, uses the Docker API to figure out what the current IP for the port is and removes the bad ones. Right now we're seeing tons of orphaned rules in our iptables chains; only for the services with a fixed port does it actually cause an observable problem, but the issue seems to happen all the time.
Seeing similar problems here as well with Docker 1.7.1 on CentOS 7 (iptables version 1.4.21).
If docker changed to add rules to the
I've written a little tool that I'm going to try to run in a cronjob every minute in production: https://gist.github.com/glasser/0486d98073ce15f38b9d
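Not the gist linked above, just a rough illustration of the kind of duplicate check such a cronjob could perform (run as root):

```bash
#!/bin/sh
# Flag host ports that have more than one DNAT rule in the DOCKER chain;
# an operator (or a follow-up script) can then remove the stale ones.
iptables -t nat -S DOCKER | grep -- '--dport' \
  | sed 's/.*--dport \([0-9]*\).*/\1/' | sort | uniq -d \
  | while read port; do
      echo "duplicate DNAT rules for host port $port"
    done
```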
After docker 1.7.0 we (Remind) have seen this issue pop up: moby/moby#13914. Now that this is closed: moby/moby#16001, we can use the new docker-engine packages, but with an old version. I'm moving the empire_ami back to docker 1.7.0 till 13914 above is fixed. As well, this should fix ECS stats; we just needed a bunch of volumes, per aws/amazon-ecs-agent#174. Doing this internally @ Remind fixed this.
Yes, I have the same problem with 1.9.0.
I'm having the same problem:
☃ docker -v
☃ docker info
☃ uname -a
In case it's helpful to anyone: I just ran into this problem with 1.10.3, and restarting the docker daemon corrected it.
I just had the exact same problem with 1.10.3, and it means that the final part of the introductory tutorial was broken for me. I was able to get it working by adding
I am having the same issue using 1.10.3 on Ubuntu 14.04. The only way I can access exposed ports is when I start the container using --net=host. I have tried adding --ip=0.0.0.0 to DOCKER_OPTS in /etc/default/docker and restarting the daemon, but that does not work. Anyone else able to make it work on Ubuntu?
I modified /etc/sysctl.conf and added/uncommented net.ipv6.conf.all.forwarding=1. After rebooting, the ports are accessible with the bridge network. However, if I use a VPN, then I am back to being unable to access the ports. Any suggestions?
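For reference, a sketch of making that setting persistent and applying it without a reboot:

```bash
# Persist the setting and apply it immediately
echo 'net.ipv6.conf.all.forwarding=1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

# The IPv4 counterpart, which bridged container networking also relies on
sudo sysctl -w net.ipv4.ip_forward=1
```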
I switched to OpenConnect from Cisco AnyConnect and it all works now.
If you want your container ports to bind on your IPv4 address, just:
Works for me on docker 1.9.1.
Any clue where this might go in Debian? I tried adding it to
and also as a part of DOCKER_OPTS:
I've recently upgraded to 1.11.1:
Still, I get containers that listen on IPv6 only; adding
@joshuacox what version of Debian are you running? If it's a version that uses systemd, then
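On systemd-based Debian/Ubuntu releases, DOCKER_OPTS in /etc/default/docker is typically not read; a drop-in unit is one way to pass the flag. A sketch (the paths and ExecStart line are illustrative for a 1.11-era package):

```bash
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/ip.conf
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// --ip=0.0.0.0
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
```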
I'm seeing the same issue on Ubuntu 14.04. The --ip flag doesn't change anything; the --net=host option allows connections to the mapped ports.
$ docker -v
Docker version 1.11.2, build b9f10c9
OK, same problem here on Ubuntu 14.04. No leftover iptables rules. Works fine on another machine with docker 0.10.3.
ubuntu@ip-172-31-0-235:
I am having the same problem. I am trying to dockerize HDFS. HDFS has 2 components: NameNode and DataNode. The NameNode is roughly the "master"; the DataNode is the "worker". The NameNode is responsible for managing one or more DataNodes. I stood up a DataNode and linked the NameNode to the DataNode. The DataNode isn't able to register itself with the NameNode. When I telnet to 8020 on the NameNode from the DataNode, I see the same problem: the connection drops as soon as it connects.
@GaalDornick When you are on the host and you telnet to the exposed port, it goes through a proxy process, which is probably why telnet is connecting. To me it sounds like nothing is listening on 8020, or, more correctly, nothing is listening on 8020 on eth0 in the container, which is where the forwarded port should be going.
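A quick way to check that from the host (the container name here is hypothetical):

```bash
# Show listening sockets inside the container; 8020 should appear bound to
# 0.0.0.0 (or the container's eth0 address), not only to 127.0.0.1
docker exec -it namenode netstat -tlnp

# or, if netstat isn't installed in the image:
docker exec -it namenode ss -tln
```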
@cpuguy83 When I create a bash shell in the container and telnet to localhost 8020, it connects. It's a standard Hadoop NameNode. It listens on both ports 50070 and 8020. It's not logging any errors. I've never seen a case where the NameNode listens on one port but not the other. If the NameNode is not able to listen on 8020, I would expect it to log errors or stop.
@GaalDornick That's my point.
Ahh, I see. How do I check where Hadoop is listening on port 8020? Normally I run
I do see something in the Hadoop logs. When it listens on 50070, it logs this:
When it listens on 8020, it logs this:
The one that works binds to 0.0.0.0, and the one that doesn't binds to 127.0.0.1. Could this cause the problem?
@GaalDornick Most likely that's it.
YES!! That worked. Thanks @cpuguy83. Just in case someone else is facing this problem: configure your application to listen on 0.0.0.0, not 127.0.0.1. I bet Docker's documentation probably spells this out somewhere and I missed it.
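For the HDFS case specifically, one way to do that is to set the NameNode's RPC bind host. This is only a sketch, assuming Hadoop 2.x; the fragment belongs inside the <configuration> element of hdfs-site.xml, and merging it in plus restarting the NameNode is left to your particular setup:

```bash
# Write the property fragment somewhere convenient for merging by hand
cat <<'EOF' > /tmp/namenode-bind-host.xml
<property>
  <name>dfs.namenode.rpc-bind-host</name>
  <value>0.0.0.0</value>
</property>
EOF
```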
Same trouble here. I'm running docker version 1.12.3, build 6b644ec. I don't have any special configuration, and the NAT table is empty.
@valtoni please open a new issue with at least the output of
+1. I had the same problem.
Greetings. Running the following environment:
Trying to deploy ETCD cluster in HOST mode:
Problem: the deployed container only binds to the loopback interface. Below are the etcd ports (
Any ideas?
@akamalov the default configuration of etcd is to listen on localhost only.
Just as a quick check, try:
and it should listen on any IP address.
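A sketch of what that check might look like with host networking (the image name and HOST_IP are placeholders, not taken from the comment above; --listen-client-urls and --advertise-client-urls are standard etcd flags):

```bash
# Make etcd listen on all interfaces instead of only localhost;
# the advertise URL should be the host's routable address
docker run -d --net=host quay.io/coreos/etcd \
  etcd --listen-client-urls http://0.0.0.0:2379 \
       --advertise-client-urls http://HOST_IP:2379
```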
This issue has become a kitchen sink of many issues related to networking. Some issues reported here are due to misconfiguration of the container, some because of settings on the host or daemon configuration. I'm going to lock this issue to prevent the discussion from diverging further. If you are running into issues and suspect there's a bug, please open a new issue with details and the exact steps to reproduce.
I have encountered a strange issue that does not allow me to connect to an exposed container port:
The port is there and listening
Telnet is able to connect, but only to localhost
Iptables:
However, if I run docker with --net=host, I am successfully able to connect to the port.
Linux Logstash 3.13.0-54-generic #91-Ubuntu SMP Tue May 26 19:15:08 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux hosted on Azure
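To summarize the report as a reproduction sketch (the image name and port below are placeholders, not taken from the report):

```bash
# Published port: reachable from the host's own localhost, but not externally
docker run -d -p 5000:5000 some/image

# Same container with host networking: the port is reachable from outside
docker run -d --net=host some/image
```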