
Cannot start container: Port has already been allocated #6476

Closed
discordianfish opened this Issue Jun 17, 2014 · 79 comments

@discordianfish
Contributor

discordianfish commented Jun 17, 2014

Hi,

Docker 1.0.0 with aufs on Ubuntu 14.04 here.
I haven't found a way to reproduce it, but since upgrading to 1.0.0 I quite often get the error "port has already been allocated" when starting a container that was stopped earlier:

$ docker start n1
Error: Cannot start container n1: port has already been allocated
2014/06/17 13:07:09 Error: failed to start one or more containers

$ docker inspect n1|jq .[0].State.Running
false

$ docker inspect n1|jq .[0].HostConfig.PortBindings
{
  "7001/tcp": [
    {
      "HostPort": "7001",
      "HostIp": ""
    }
  ],
  "4001/tcp": [
    {
      "HostPort": "4001",
      "HostIp": ""
    }
  ],
  "10250/tcp": [
    {
      "HostPort": "10250",
      "HostIp": ""
    }
  ]
}

$ sudo netstat -lnp | egrep '10250|4001|7001'
$ sudo lsof -n | egrep '10250|4001|7001'

And the processes aren't running either:

$ ps -ef|grep etc[d]
$ ps -ef|grep kubele[t]
@discordianfish


Contributor

discordianfish commented Jun 17, 2014

PS: The only way to recover is restarting docker after which everything works as expected.

@shykes shykes added this to the 1.0.1 milestone Jun 17, 2014

@shykes


Collaborator

shykes commented Jun 17, 2014

Tentatively adding to 1.0.1 milestone (cc @vieux @crosbymichael @tibor @unclejack )

@LK4D4


Contributor

LK4D4 commented Jun 17, 2014

I see here that if there is an error during port allocation, the already-allocated ports are never released. Maybe that is the problem.
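The leak described here can be illustrated with a toy allocator in Go. This is a hypothetical sketch with made-up names, not Docker's actual code: if one port in a batch fails to allocate, the ports acquired earlier in the same batch stay marked as used unless the caller rolls them back.

```go
package main

import "fmt"

// Toy port allocator. Names are illustrative, not Docker's real API.
type allocator struct{ used map[int]bool }

func newAllocator() *allocator { return &allocator{used: map[int]bool{}} }

func (a *allocator) acquire(port int) error {
	if a.used[port] {
		return fmt.Errorf("port %d has already been allocated", port)
	}
	a.used[port] = true
	return nil
}

func (a *allocator) release(port int) { delete(a.used, port) }

// Buggy: on failure, ports acquired earlier in the batch stay marked used.
func allocateAllBuggy(a *allocator, ports []int) error {
	for _, p := range ports {
		if err := a.acquire(p); err != nil {
			return err // earlier ports in this batch are leaked
		}
	}
	return nil
}

// Fixed: roll back already-acquired ports when a later one fails.
func allocateAll(a *allocator, ports []int) error {
	var done []int
	for _, p := range ports {
		if err := a.acquire(p); err != nil {
			for _, q := range done {
				a.release(q)
			}
			return err
		}
		done = append(done, p)
	}
	return nil
}

func main() {
	a := newAllocator()
	a.used[4001] = true // 4001 is already taken
	_ = allocateAllBuggy(a, []int{7001, 4001})
	fmt.Println("7001 leaked after buggy batch:", a.used[7001]) // true

	b := newAllocator()
	b.used[4001] = true
	_ = allocateAll(b, []int{7001, 4001})
	fmt.Println("7001 leaked after fixed batch:", b.used[7001]) // false
}
```

Once a port is leaked this way, no container can bind it again until the daemon restarts, which matches the symptom reported above.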

@crosbymichael


Contributor

crosbymichael commented Jun 18, 2014

@LK4D4 maybe we are hitting an error when we stop the container and the port is not being released?

@LK4D4


Contributor

LK4D4 commented Jun 18, 2014

@crosbymichael Yup, you are right. Then we should see errors in the log on stop.
@discordianfish Can you please record some logs?
Also, I've discovered that ReleasePort never returns an error :)

@crosbymichael


Contributor

crosbymichael commented Jun 18, 2014

@LK4D4 are you looking into this issue or do you want me to take it?

@LK4D4


Contributor

LK4D4 commented Jun 18, 2014

@crosbymichael I'm almost asleep :) and have made no progress on this issue, so feel free to take it.

@discordianfish


Contributor

discordianfish commented Jun 19, 2014

I'll provide the logs as soon as I run into that again, but I haven't so far.

@kruxik


kruxik commented Jun 19, 2014

The issue can be reproduced easily. Whenever you try to bind a container to a host port which is already occupied, Docker keeps complaining even after the port becomes free.

Tested with Docker 1.0 on Ubuntu 14.04

How to reproduce:

$ sudo service apache2 start #occupies port 80
$ sudo docker run -p 80:80 -i -t ubuntu:14.04 /bin/bash # first try

2014/06/19 15:53:59 Error: Cannot start container 2094c72e4485bd9f54e7f3f8de797845d6d8a43db37fd2f4f8231222e4bf377e: port has already been allocated

$ sudo service apache2 stop # frees port 80, can be verified by nmap
$ sudo docker run -p 80:80 -i -t ubuntu:14.04 /bin/bash # second try

2014/06/19 15:53:59 Error: Cannot start container 2094c72e4485bd9f54e7f3f8de797845d6d8a43db37fd2f4f8231222e4bf377e: port has already been

$ sudo service docker restart
$ docker run -p 80:80 -i -t ubuntu:14.04 /bin/bash # now it works
root@cc4847a4c37d:/#

@LK4D4


Contributor

LK4D4 commented Jun 19, 2014

@kruxik Wow, thank you

@LK4D4


Contributor

LK4D4 commented Jun 19, 2014

In bridge/driver.go, in the AllocatePort function, we allocate the port successfully on the first try, then the port mapping fails, then we try to allocate the same port a second time: here we get "port has already been allocated" and exit. The port stays allocated forever.
@crosbymichael I hope that helps :) Also, I can try to fix this tomorrow if you have no time.
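The sequence above can be sketched as a toy Go program. This is illustrative only, not the actual bridge/driver.go: the first allocation succeeds, the mapping step fails, and the buggy retry then collides with this call's own earlier allocation, leaving the map entry behind.

```go
package main

import "fmt"

// Toy sketch of the sequence above; names are illustrative, not Docker's code.
var allocated = map[int]bool{}

func requestPort(p int) error {
	if allocated[p] {
		return fmt.Errorf("port has already been allocated")
	}
	allocated[p] = true
	return nil
}

// Stands in for the mapping step failing because another process
// (e.g. apache) already holds the host port.
func mapPort(p int) error {
	return fmt.Errorf("bind for 0.0.0.0:%d failed", p)
}

func allocatePort(p int) error {
	if err := requestPort(p); err != nil {
		return err
	}
	if err := mapPort(p); err != nil {
		// Buggy retry: asks the allocator for a port this very call
		// already holds, so it fails with "already allocated" and the
		// entry in `allocated` is never cleaned up.
		return requestPort(p)
	}
	return nil
}

func main() {
	fmt.Println(allocatePort(80)) // port has already been allocated
	fmt.Println(allocated[80])    // true: leaked until the daemon restarts
}
```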

icecrime added a commit to icecrime/docker that referenced this issue Jun 19, 2014

Restrict portallocator to Docker allocated ports
Port allocation status is stored in a global map: a port detected as in use will remain so for the lifetime of the daemon. Change the behavior to only mark as allocated those ports claimed by Docker itself (which we can trust to be properly removed from the allocation map once released). Ports allocated by other applications will always be retried, to account for the eventuality of the port having been released.

Fixes #6476.

Docker-DCO-1.1-Signed-off-by: Arnaud Porterie <icecrime@gmail.com> (github: icecrime)

@kruxik


kruxik commented Jun 20, 2014

Just verified, the issue is still in Docker 1.0.1

@LK4D4


Contributor

LK4D4 commented Jun 20, 2014

@kruxik yup, it will be fixed by #6555

@discordianfish


Contributor

discordianfish commented Jun 20, 2014

@shykes I'm quite disappointed that this hasn't been fixed in 1.0.1. This is a severe bug for every production deployment.

@vieux vieux modified the milestones: 1.0.2, 1.0.1 Jun 20, 2014

@soichih


soichih commented Jun 20, 2014

I am having the same issue.. so I am subscribing to this thread.

@erikh


Contributor

erikh commented Jun 23, 2014

@LK4D4 is this fully resolved by your patches? I was going to see if I could resolve this.

@LK4D4


Contributor

LK4D4 commented Jun 23, 2014

@erikh these are @icecrime's patches in #6555, and yes, they resolve this. I think they just need a little polish before merge.

@danishabdullah


danishabdullah commented Jun 24, 2014

+1...
having same issue

@meetri


meetri commented Jun 24, 2014

+1 ...
me too. Looking forward to a fix. I can't use this in production until this is resolved. :(

is there a known workaround for the time being?

@llonchj


llonchj commented Jun 25, 2014

+1

@sickill


sickill commented Jun 25, 2014

Just hit the same thing with docker 1.0.0 on a production box...

@aphexddb


aphexddb commented Jun 25, 2014

+1 this is a huge roadblock for us

Client version: 1.0.1
Client API version: 1.12
Go version (client): go1.2.1
Git commit (client): 990021a
Server version: 1.0.1
Server API version: 1.12
Go version (server): go1.2.1
Git commit (server): 990021a
@rafikk


rafikk commented Jun 27, 2014

Causing significant production problems for us as well. Not sure why this isn't a higher priority. This should be merged and receive an expedited release, IMO.

@LK4D4


Contributor

LK4D4 commented Jun 27, 2014

Guys, you can help with this by testing patch from #6682

@crosbymichael


Contributor

crosbymichael commented Jun 27, 2014

@discordianfish do you have a simple way to reproduce?

@davidshq


davidshq commented Jun 28, 2014

+1 same issue.

@rafikk


rafikk commented Jun 28, 2014

@crosbymichael, see comment above to reproduce.

@davidshq


davidshq commented Jun 28, 2014

Are there any workarounds besides applying the patch? I mean, as an end user, not seeking to compile docker myself...

@erikh


Contributor

erikh commented Jun 28, 2014

confirmed the patch #6682 fixes the issue according to the reproduction steps above.

/cc @crosbymichael @vieux @LK4D4

@unclejack


Contributor

unclejack commented Sep 11, 2014

@falzm That kernel is unsupported. Please upgrade to kernel 3.8 or newer.

@falzm


falzm commented Sep 11, 2014

OK @unclejack, will try that.

@chriswessels


chriswessels commented Sep 11, 2014

I'm having the same problem running on CoreOS 4.10 stable:

core@ip-10-0-2-148 ~ $ docker restart b8b489ea73ff
Error response from daemon: Cannot restart container b8b489ea73ff: Bind for 0.0.0.0:443 failed: port is already allocated
2014/09/11 22:25:40 Error: failed to restart one or more containers

docker version

core@ip-10-0-2-148 ~ $ docker version
Client version: 1.1.2
Client API version: 1.13
Go version (client): go1.2
Git commit (client): d84a070
Server version: 1.1.2
Server API version: 1.13
Go version (server): go1.2
Git commit (server): d84a070

docker info

Containers: 12
Images: 92
Storage Driver: btrfs
Execution Driver: native-0.2
Kernel Version: 3.15.8+
Username: xxx
Registry: [https://index.docker.io/v1/]

uname -a

core@ip-10-0-2-148 ~ $ uname -a
Linux ip-10-0-2-148.us-west-1.compute.internal 3.15.8+ #2 SMP Fri Aug 15 22:29:31 UTC 2014 x86_64 Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz GenuineIntel GNU/Linux
@umputun


umputun commented Sep 11, 2014

To me it looks like this happens only when stopping a container takes a long time. I guess if the application in the container doesn't handle SIGTERM (or whatever docker stop sends) properly, the container and app get killed after 10 seconds, but the port stays allocated from Docker's point of view. The only way to "fix" it is restarting the Docker daemon.

@chriswessels


chriswessels commented Sep 11, 2014

That's an interesting theory. My containers are stopping within 5 seconds. I wonder...

@LK4D4


Contributor

LK4D4 commented Sep 12, 2014

@umputun @chriswessels Thanks for the feedback, guys. I think I've fixed this in master, and the fix will be in 1.2.1.
The problem was a race condition between start and stop, so the cleanup function wasn't called for the container.
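The class of bug described here (a stop racing a start so that cleanup is skipped) can be sketched in Go with a per-container mutex. This is an illustrative model, not Docker's actual code:

```go
package main

import (
	"fmt"
	"sync"
)

// Illustrative model: start and stop for one container are serialized with
// a mutex so the port cleanup in Stop always pairs with the allocation in
// Start and can't be skipped by a racing caller.
type container struct {
	mu      sync.Mutex
	running bool
	ports   map[int]bool
}

func (c *container) Start(port int) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.running {
		return fmt.Errorf("already running")
	}
	c.ports[port] = true
	c.running = true
	return nil
}

func (c *container) Stop() {
	c.mu.Lock()
	defer c.mu.Unlock()
	for p := range c.ports { // cleanup runs under the same lock as Start
		delete(c.ports, p)
	}
	c.running = false
}

func main() {
	c := &container{ports: map[int]bool{}}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			_ = c.Start(8080)
			c.Stop()
		}()
	}
	wg.Wait()
	fmt.Println("leaked ports:", len(c.ports)) // 0
}
```

Without the mutex, an interleaved Start/Stop pair can leave an entry in the ports map with no running container, which is exactly the "port has already been allocated" state.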

@chriswessels


chriswessels commented Sep 12, 2014

Thanks @LK4D4! Do you know when 1.2.1 will be released? This is a critical bug in production.

Does anyone know of a fix for this issue without restarting the docker daemon?

@LK4D4


Contributor

LK4D4 commented Sep 13, 2014

@chriswessels Seems like this is impossible without restarting the daemon :/ I think 1.2.1 will be released next week.

@fye


fye commented Sep 15, 2014

+1

docker version:
Client version: 0.11.1-dev
Client API version: 1.12
Go version (client): go1.2
Git commit (client): 02d20af/0.11.1
Server version: 0.11.1-dev
Server API version: 1.12
Go version (server): go1.2
Git commit (server): 02d20af/0.11.1

docker start gitlab:
Error: Cannot restart container gitlab: port has already been allocated
2014/09/15 09:24:53 Error: failed to restart one or more containers

@choclo


choclo commented Sep 16, 2014

Also having same issue here:

Client version: 1.1.2
Client API version: 1.13
Go version (client): go1.2.2
Git commit (client): d84a070/1.1.2
Server version: 1.1.2
Server API version: 1.13
Go version (server): go1.2.2
Git commit (server): d84a070/1.1.2

Waiting for v1.2.1 to see if clears up this issue 👍

@i0n


i0n commented Sep 18, 2014

+1

Client version: 1.1.2
Client API version: 1.13
Go version (client): go1.2
Git commit (client): d84a070
Server version: 1.1.2
Server API version: 1.13
Go version (server): go1.2
Git commit (server): d84a070

@katcipis


katcipis commented Sep 19, 2014

+1

Client version: 1.0.1
Client API version: 1.12
Go version (client): go1.2.1
Git commit (client): 990021a
Server version: 1.0.1
Server API version: 1.12
Go version (server): go1.2.1
Git commit (server): 990021a

@phemmer


Contributor

phemmer commented Sep 19, 2014

Just an FYI as clarification for the new people coming to this bug. This has been fixed. It will be part of 1.2.1 which hasn't been released yet.

@luxn0429


luxn0429 commented Sep 22, 2014

when will v1.2.1 be released?

@kriss9


kriss9 commented Sep 23, 2014

+1
Client version: 1.1.2
Client API version: 1.13
Go version (client): go1.2.2
Git commit (client): d84a070/1.1.2
Server version: 1.1.2
Server API version: 1.13
Go version (server): go1.2.2
Git commit (server): d84a070/1.1.2

@adamhadani


adamhadani commented Oct 9, 2014

@phemmer - any dates for 1.2.1? This is a critical bug for us (and for many other folks, it seems). A few questions:

  1. Are there any issues tagged for the 1.2.1 milestone that could use community help in fixing / QAing?
  2. Is there a 1.2.1 RC available for the community to test-drive and report feedback on?
  3. If a lot of open issues remain, is it possible to split them off into a future release so 1.2.1 can ship as-is with this major bug fix?
@crosbymichael


Contributor

crosbymichael commented Oct 9, 2014

You can test 1.3.0 with this fix here:

#8323

@adamhadani


adamhadani commented Oct 9, 2014

@crosbymichael - thanks! Have you been trying this? Are there any known regressions or other major issues with this RC to watch out for? I will definitely take it for a test drive.

@crosbymichael


Contributor

crosbymichael commented Oct 9, 2014

Well, I would not push the RC to your production servers, but this build should be stable enough for you to help test. If you have any problems, please let us know before the release so we can get them fixed and you get a proper release to run in production.

@adamhadani


adamhadani commented Oct 9, 2014

@crosbymichael - Got it, thanks. One last question before I get into this: is there an updated CHANGELOG anywhere to look over? It would be useful to know what new features, modifications, and bug fixes are in this release before I deploy it.

@crosbymichael


Contributor

crosbymichael commented Oct 10, 2014

The last commit in that branch includes a high-level changelog. Docker is very active and there are too many commits in each release to include everything in the changelog, so just run git log v1.2.0..

@adamhadani


adamhadani commented Oct 10, 2014

@crosbymichael - got it, thanks

@kung-foo


kung-foo commented Oct 15, 2014

+1

docker run --rm -it -p "8000:8080" mahcontainer
2014/10/15 22:17:49 Error response from daemon: Bind for 0.0.0.0:8000 failed: port is already allocated
Client version: 1.3.0
Client API version: 1.15
Go version (client): go1.3.3
Git commit (client): f1e7de2
OS/Arch (client): linux/amd64
Server version: 1.3.0
Server API version: 1.15
Go version (server): go1.3.3
Git commit (server): f1e7de2
@adamhadani


adamhadani commented Oct 16, 2014

@crosbymichael I've been running that binary RC for a few days now and it's looking good so far. We're mostly doing login/pull/start/stop/rm flows as part of a deployment process and it looks solid: containers restart well, and I haven't run into the "port already bound" issue so far.

@buzypi


buzypi commented Nov 12, 2014

Faced this issue in CoreOS - Docker v 1.2.0.

No running or stopped container was claiming the port when I tried to run a new container using a port that was previously used.

@jessfraz


Contributor

jessfraz commented Nov 12, 2014

@buzypi the issue was fixed in 1.3.0, let us know if you are still seeing it after upgrading

@soichih


soichih commented Nov 15, 2014

@jfrazelle Great! Do you know when will the fix be released on RHEL6 epel?

Actually, do you recommend compiling / installing Docker directly from docker.com?

@jessfraz


Contributor

jessfraz commented Nov 15, 2014

You can download just the docker binary and swap that out for now; this might be the easiest way: https://docs.docker.com/installation/binaries/ Unsure when the RHEL package will be updated.


@wkf


wkf commented Dec 2, 2014

@jfrazelle was this actually fixed? It looks like @kung-foo is reporting the issue even in 1.3.0?

@jessfraz

Contributor
jessfraz commented Dec 2, 2014

this issue has a lot of noise, @kung-foo if you are having a problem with ports in use on the latest version, please open a new issue, so it gets the attention it deserves.

@kung-foo

Contributor
kung-foo commented Dec 2, 2014

In my case, I ended up rebuilding all of the containers and the problem was no longer reproducible.

@amalagaura

amalagaura commented Jan 30, 2015

I am still seeing this on version 1.4.1. It is unfortunate because I cannot restart docker since it is on production.

@LK4D4

Contributor
LK4D4 commented Jan 30, 2015

@amalagaura Can you open a new issue and describe how you run your containers, so we can find a reproduction case? Thanks for the report!

@devel0

devel0 commented Oct 12, 2016

I have the issue. First of all, netstat -tlp does not report any docker-proxy listening on the port I am trying to bind to (port 50000), yet docker fails as follows:

root@phy:~# docker run -d -ti -h test -p "50000:22" ubuntu
2869b2513188e12061e8beaa65557e6edc882fd948522d6ae32a60c23a58ea50
docker: Error response from daemon: driver failed programming external connectivity on endpoint sick_kare (c4ce18c4382fdd948059e46a8c98fc29b886b1f55c9556a76ccc980916153eea): Bind for 0.0.0.0:50000 failed: port is already allocated.

/var/log/syslog reports:

Oct 12 21:32:53 phy kernel: [12170.580825] aufs au_opts_verify:1597:dockerd[2793]: dirperm1 breaks the protection by the permission bits on the lower branch
Oct 12 21:32:53 phy kernel: [12170.790935] aufs au_opts_verify:1597:dockerd[2793]: dirperm1 breaks the protection by the permission bits on the lower branch
Oct 12 21:32:53 phy kernel: [12170.903157] aufs au_opts_verify:1597:dockerd[2128]: dirperm1 breaks the protection by the permission bits on the lower branch
Oct 12 21:32:53 phy systemd-udevd[16847]: Could not generate persistent MAC address for veth1a0e9b6: No such file or directory
Oct 12 21:32:53 phy systemd-udevd[16849]: Could not generate persistent MAC address for vetha377cc4: No such file or directory
Oct 12 21:32:53 phy kernel: [12170.905898] device vetha377cc4 entered promiscuous mode
Oct 12 21:32:53 phy kernel: [12170.906016] IPv6: ADDRCONF(NETDEV_UP): vetha377cc4: link is not ready
Oct 12 21:32:53 phy dockerd[2109]: time="2016-10-12T21:32:53.874022431+02:00" level=warning msg="Failed to allocate and map port 50000-50000: Bind for 0.0.0.0:50000 failed: port is already allocated"
Oct 12 21:32:53 phy kernel: [12171.287673] docker0: port 1(vetha377cc4) entered disabled state
Oct 12 21:32:53 phy kernel: [12171.291129] device vetha377cc4 left promiscuous mode
Oct 12 21:32:53 phy kernel: [12171.291132] docker0: port 1(vetha377cc4) entered disabled state
Oct 12 21:32:54 phy dockerd[2109]: time="2016-10-12T21:32:54.204770029+02:00" level=error msg="Handler for POST /v1.24/containers/2869b2513188e12061e8beaa65557e6edc882fd948522d6ae32a60c23a58ea50/start returned error: driver failed programming external connectivity on endpoint sick_kare (c4ce18c4382fdd948059e46a8c98fc29b886b1f55c9556a76ccc980916153eea): Bind for 0.0.0.0:50000 failed: port is already allocated"

While if I try with a different port (50001) it works, and syslog reports:

Oct 12 21:34:33 phy kernel: [12270.923521] aufs au_opts_verify:1597:dockerd[2793]: dirperm1 breaks the protection by the permission bits on the lower branch
Oct 12 21:34:33 phy kernel: [12271.186821] aufs au_opts_verify:1597:dockerd[2793]: dirperm1 breaks the protection by the permission bits on the lower branch
Oct 12 21:34:33 phy systemd-udevd[17015]: Could not generate persistent MAC address for vethc08dcd4: No such file or directory
Oct 12 21:34:33 phy kernel: [12271.290075] aufs au_opts_verify:1597:dockerd[2214]: dirperm1 breaks the protection by the permission bits on the lower branch
Oct 12 21:34:33 phy kernel: [12271.292078] device vethafba9bc entered promiscuous mode
Oct 12 21:34:33 phy kernel: [12271.292193] IPv6: ADDRCONF(NETDEV_UP): vethafba9bc: link is not ready
Oct 12 21:34:33 phy systemd-udevd[17016]: Could not generate persistent MAC address for vethafba9bc: No such file or directory
Oct 12 21:34:34 phy kernel: [12271.841888] eth0: renamed from vethc08dcd4
Oct 12 21:34:34 phy kernel: [12271.861628] IPv6: ADDRCONF(NETDEV_CHANGE): vethafba9bc: link becomes ready
Oct 12 21:34:34 phy kernel: [12271.861646] docker0: port 1(vethafba9bc) entered forwarding state
Oct 12 21:34:34 phy kernel: [12271.861653] docker0: port 1(vethafba9bc) entered forwarding state
Oct 12 21:34:49 phy kernel: [12286.910151] docker0: port 1(vethafba9bc) entered forwarding state

I think something got stuck when I tried to create a second container using port mapping 50000 in my earlier tests, and now I cannot use that port any more even though it is actually free in the OS.

Any suggestions or workarounds?
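When Docker reports "port is already allocated" but netstat shows nothing, it helps to separate what the kernel thinks from what Docker's allocator thinks. A minimal bash probe for the kernel side (the port number is just an example; this uses bash's `/dev/tcp` pseudo-device, so it requires bash, not plain sh):

```shell
# Ask the kernel directly whether anything accepts TCP connections on a
# port. Docker's "already allocated" bookkeeping is separate from this:
# if nothing listens here but docker still refuses the port, the
# allocation is stale on Docker's side.
port_in_use() {
    # the subshell opens (and on exit closes) a connection to 127.0.0.1:$1;
    # the redirection fails with "connection refused" when nothing listens
    (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

if port_in_use 50000; then
    echo "something is listening on 50000"
else
    echo "nothing is listening on 50000 (stale allocation on Docker's side)"
fi
```

If the probe says nothing is listening, the port is held only in Docker's internal state, which points at the daemon-restart workarounds discussed below rather than at another process.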

@aboch

Contributor

aboch commented Oct 12, 2016

@devel0
workaround: remove the file /var/lib/docker/network/files/local-kv.db then restart the daemon.
(Be aware you'll have to recreate your networks)

Also, this problem has been reported in more recent issues, and it is likely not related to this old GH issue, which is closed and concerns docker 1.0.0.

Feel free to update any of the existing issues with your docker version and steps to reproduce.
In particular, the error you are facing seems related to stale data in the local datastore.
This may happen because of an ungraceful shutdown of the daemon.
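The workaround above, written out as a command sequence. Note that it is destructive: it wipes Docker's local network state, so any user-defined networks must be recreated afterwards; the path assumes a default install location.

```shell
# Destructive workaround for stale port allocations: delete the
# libnetwork local key-value store and restart the daemon.
# User-defined networks must be recreated afterwards.
sudo service docker stop           # or: sudo systemctl stop docker
sudo rm /var/lib/docker/network/files/local-kv.db
sudo service docker start          # or: sudo systemctl start docker
```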

@mavenugo

Contributor

mavenugo commented Oct 12, 2016

A lot has changed since docker 1.0.0, and using this issue for later versions isn't useful. I will lock this issue for now; we can unlock it if someone sees a need for it.

@moby moby locked and limited conversation to collaborators Oct 12, 2016
