"No route to host" after reboot , docker overlay network issue #23270
Comments
|
A few more pieces of information after further debugging:
So I assume that the communication problem is really limited to the existing containers. |
|
Another relative of #25266? |
|
Hey, I'm experiencing the same problems as @michaljrk. I can confirm that it's NOT related to #25266. If I connect to container a and ping container b, it says "Host not reachable". Then I restart container b; now my ping reaches container b, but only for about 2-5 minutes, or until I try to put anything other than a ping on the connection. I have the following additional information about this problem.
All servers have similar specs and run the same etcd, Docker, and OS versions at the same update state. My current workaround is to disable auto-reboot. If there's a kernel update, I stop all containers, remove all overlay networks, and re-set up the entire thing. Ansible is great for such a job, but this is still kind of a big problem and not really scalable. Any ideas what could cause the issue? Any additional information I can provide? Thank you very much! |
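The teardown/re-setup workaround described above can be sketched as a small script. Everything specific here is an assumption, not taken from the thread: the container names (mongo, mongo2, mongo3), the network name (mongonet), and the dry-run default. By default it only prints the commands; set RUN= (empty) to actually execute them, and repeat on each node.

```shell
#!/bin/sh
# Dry-run sketch of the "remove and recreate the overlay" workaround.
# Container names and the network name "mongonet" are placeholders.
RUN=${RUN-echo}   # default: just print the commands; set RUN= to execute

redeploy_overlay() {
    # 1. Stop the containers attached to the broken overlay network.
    $RUN docker stop mongo mongo2 mongo3
    # 2. Remove the overlay network (repeat on every node that sees it).
    $RUN docker network rm mongonet
    # 3. Recreate the overlay; the KV store gets a fresh record for it.
    $RUN docker network create -d overlay mongonet
    # 4. Bring the containers back up on the recreated network.
    $RUN docker start mongo mongo2 mongo3
}

redeploy_overlay
```

In dry-run mode each line is echoed instead of executed, which makes it easy to review the exact commands before running them for real.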
|
I was hoping that it would be resolved in 1.12, but apparently it's not. |
|
/cc @sanimej |
|
I had hoped that the update to docker 1.12.5 might have fixed the problem, but I've just run into the same trouble with the new version. Kind regards, |
|
@michaljrk @anpieber If you have been seeing this issue randomly, but only after a host or daemon restart on one of the nodes, this might address it. |
|
Thanks for the hint @sanimej - how can I find out which Docker version contains that libnetwork release, so that I can test it? Thank you very much in advance! |
|
I am having the same issue with the rebooted host. Has this been fixed? |
|
I have the same problem. A fix would be very nice indeed. |
|
same here |
|
It's a stupid hotfix, but since I keep forgetting this bug when I reboot my Docker machine, I'm using this cron job: Seems to work well enough. |
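The commenter's actual cron entry isn't reproduced in the thread; a hypothetical @reboot entry in the same spirit (restart the daemon once per boot, after giving networking time to come up) could look like:

```shell
# Hypothetical crontab entry -- NOT the commenter's original job.
# Restarts the Docker daemon once per boot so the overlay state is
# re-read from the KV store (service name assumes Ubuntu 14.04).
@reboot sleep 60 && service docker restart
```

The sleep is a guess at how long networking and etcd need to come up; tune it for the host in question.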
|
Have the same problem. The only stable workaround is restarting Docker.
There are 3 virtual machines running Docker, hosted on 3 separate physical boxes.
A Docker overlay network is set up between them so that 3 mongo containers (mongo, mongo2, mongo3 - a replica set) can communicate.
Everything worked as expected for a while, but after restarting the physical host for mongo3, the container lost communication with mongo and mongo2 over the Docker overlay network.
I'm still able to see the network when running docker network ls, and mongo3 is still connected to it. I've tried reconnecting the container to the network, restarting it, restarting the Docker daemon, and restarting the hosting server; nothing helps. iptables rules look fine, and traffic for ports 7946 and 4789 is permitted. Any ideas?
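Those two ports are the overlay's control plane (7946, TCP and UDP, serf/gossip) and data plane (4789/UDP, VXLAN). As a quick sanity check between hosts, something like the sketch below can be run from one node against a peer; the peer address is a placeholder, and a UDP "ok" only means nothing actively rejected the probe, since UDP has no handshake.

```shell
#!/bin/sh
# Sketch: probe the ports a Docker overlay needs, from one host to a peer.
check_overlay_ports() {
    peer=$1
    # Serf control traffic (TCP side can be probed directly).
    nc -z -w 2 "$peer" 7946 2>/dev/null \
        && echo "7946/tcp ok" || echo "7946/tcp blocked?"
    # VXLAN data traffic is UDP only.
    nc -z -u -w 2 "$peer" 4789 2>/dev/null \
        && echo "4789/udp ok" || echo "4789/udp blocked?"
}

check_overlay_ports 20.X.X.186   # placeholder peer address
```

A firewall that passes 7946 but silently drops 4789/UDP produces exactly the symptom in this thread: the network looks healthy in docker network ls while container-to-container traffic goes nowhere.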
Environment:
docker version:
Client:
Version: 1.10.2
API version: 1.22
Go version: go1.5.3
Git commit: c3959b1
Built: Mon Feb 22 21:37:01 2016
OS/Arch: linux/amd64
Server:
Version: 1.10.2
API version: 1.22
Go version: go1.5.3
Git commit: c3959b1
Built: Mon Feb 22 21:37:01 2016
OS/Arch: linux/amd64
docker info:
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 1
Server Version: 1.10.2
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 21
Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
Volume: local
Network: null host overlay bridge
Kernel Version: 3.13.0-86-generic
Operating System: Ubuntu 14.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.861 GiB
Name: mongo3host
ID: I5GA:KJFZ:27OF:GG2C:QYLO:UX5J:ZCKU:IWFC:5LAH:K6IL:JXAS:A5SM
WARNING: No swap limit support
Cluster store: etcd://20.X.X.180:12379
Cluster advertise: 20.X.X.186:2375