No available IPv4 addresses on this network's address pools: bridge #18527
Hi! Please read this important information about creating issues.

If you are reporting a new issue, make sure that we do not have any duplicates already open. You can ensure this by searching the issue list for this repository. If there is a duplicate, please close your issue and add a comment to the existing issue instead.

If you suspect your issue is a bug, please edit your issue description to include the BUG REPORT INFORMATION shown below. If you fail to provide this information within 7 days, we cannot debug your issue and will close it. We will, however, reopen it if you later provide the information.

This is an automated, informational response. Thank you.

For more information about reporting issues, see https://github.com/docker/docker/blob/master/CONTRIBUTING.md#reporting-other-issues

BUG REPORT INFORMATION

Use the commands below to provide key information from your environment:
Provide additional environment details (AWS, VirtualBox, physical, etc.):

List the steps to reproduce the issue:

Describe the results you received:

Describe the results you expected:

Provide additional info you think is important:

----------END REPORT ---------

#ENEEDMOREINFO
More information about the current network:
List of allocated IPs according to
Definitely not all of them.
Are you using docker-in-docker, or running more than one daemon? If you do, this may be a duplicate of #17939
No, I'm not doing that. I've seen that issue, but have not detected any similarity.
ping @aboch (sorry, I'm kinda losing track of which network issues are still being tracked) 😄
@Soulou
The system is no longer in that state, but I can reproduce it super easily. @aboch output is coming.
Output is attached to the post.
@Soulou
ping @aboch, did you have time to look at the output @Soulou provided? #18527 (comment)
@feisuzhu No way to reproduce on another host. The host which has this problem is still running and I can reproduce it. (I just have to run a few more containers until hitting this weird limit.)
@mavenugo What I can tell is that it is not the same issue hit in #18113 (comment) @Soulou In order to dig deeper into what is going on, would it be OK for you to send me (privately) your full
I have the same issue on my production server; it has occurred twice already. The number of running containers was 216 and it wasn't possible to start a new one. I was able to remove some of the running containers and the issue went away.
@psviderski @Soulou
@aboch I've just sent you an email with the db file. Please let me know if I need to provide anything else that might help you.
@aboch I had to reset the node (as it was one of our production servers) and I forgot to back up the db file :-( If the situation appears again, I'll send it to you directly!
I believe I'm having the same issue: Command:
Docker log:
System info:
@aboch I'll send you the full local-kv.db to your email. EDIT: This is the first time I'm seeing this issue in my cluster. Unfortunately, it's blocking container creation on the node, so I'll have to "fix" it by reducing the number of containers. Unclear whether I can recreate it or not. FWIW, I'm running only 122 containers on the node, whereas I have identical nodes running up to 250 containers without this issue appearing (yet).
@beetree if it's the same issue as mine, the issue will appear again at 250 containers; it was highly reproducible until I restarted the docker daemon.
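The reproduction approach described above (starting containers until the allocator refuses an address) can be sketched roughly as follows. This is a hypothetical helper, not a command from the thread; it assumes the docker CLI is on the PATH and a daemon is running:

```shell
# Keep starting throwaway containers until the daemon refuses to
# allocate an IPv4 address, then report how many starts succeeded.
reproduce_pool_exhaustion() {
  i=0
  while docker run -d --name "pool-test-$i" busybox sleep 3600 >/dev/null 2>&1; do
    i=$((i + 1))
  done
  echo "docker run failed after $i containers"
}
```

On an affected host, the failure count should line up with the "weird limit" the commenters describe rather than with the real pool size.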
Thanks @beetree for the file. The bug will not always occur; only some address release/re-allocate patterns expose the issue. As you found, you may be able to keep it working by decreasing the number of containers. But unfortunately you will hit it again on some n-th container creation, as @Soulou already mentioned. This is now fixed in master.
Great, can it be closed then? :)
I'll close this issue, as it should be resolved on master, and will be in 1.10, but feel free to comment if you're still able to reproduce this on the current master, or in the 1.10 release.
@Soulou @psviderski
Thanks @aboch and @thaJeztah for helping us out with this. Knowing that this gets fixed in 1.10 gives a lot of comfort. Keep the improvements coming...!
@aboch @thaJeztah: Thanks! 👍
Restarting the daemon solves this, but restarting the daemon can itself result in the error. UPDATE: Deleting /var/lib/docker/network seems to actually not solve it. However, deleting everything in /var/lib/docker does solve it. /beetree
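The heavier workaround mentioned here (wiping the local Docker state directory) could be sketched like this. Warning: this is destructive, deleting all images, containers, and network state; it assumes systemd manages the daemon, and the function name is hypothetical:

```shell
# WARNING: destructive. Removes the entire Docker state directory
# (default /var/lib/docker) and restarts the daemon.
reset_docker_state() {
  root="${1:-/var/lib/docker}"
  systemctl stop docker
  rm -rf "$root"
  systemctl start docker
}
```

This trades all local state for a clean IPAM database, so it is a last resort compared to removing a few containers or recreating a single network.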
@aboch @thaJeztah I don't see any mention of this having been fixed in https://github.com/docker/docker/releases/tag/v1.10.0-rc1 Are you sure it is in?
@beetree yes, this is resolved.
@mavenugo perhaps we can add a mention of "stability improvements to ....", open to suggestions if you have them.
I had the same issue, with only 10 machines running. Fixed it by removing the network and creating it again.
@jtangelder how exactly did you "remove the network and create it again"? (I have the same issue)
Something with
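A sketch of that remove-and-recreate workaround for a user-defined network. The network name `mynet` is a placeholder; containers attached to it must be disconnected or removed first, and this does not apply to the built-in `bridge` network, which cannot be removed:

```shell
# Drop and recreate a user-defined bridge network, releasing every
# address the IPAM driver had allocated from its pool.
recreate_network() {
  name="${1:-mynet}"
  docker network rm "$name" &&
  docker network create --driver bridge "$name"
}
```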
@aboch Could you point me to the commit in libnetwork which fixed this?
Hi there, one of my production hosts encountered the following error when starting a new container (creation went well, the POST /start request failed):
The bridge network is the default one, giving IPs on 172.17.0.0/16.
There are not that many containers on the host (70, see docker info output below), so I suppose there are plenty of available addresses.
In the complete log I have a warning about umount ipc, but I suppose it's related to the cleanup of the container start after the IP allocation error.
What could be the source of that? Thank you very much.
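As a sanity check on the claim that plenty of addresses remain, the usable size of an IPv4 pool can be computed directly. This small helper is illustrative only, not part of Docker (and Docker additionally reserves the gateway address from the pool):

```shell
# Number of usable host addresses for a given prefix length:
# 2^(32 - prefix) minus the network and broadcast addresses.
pool_size() {
  prefix="$1"
  echo $(( (1 << (32 - prefix)) - 2 ))
}
```

`pool_size 16` prints 65534, so 70 containers should be nowhere near exhausting 172.17.0.0/16 — which is why running out of addresses here points at an allocator bug rather than a genuinely full pool.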
Complete error log
docker version
docker info