[BUG] Docker-proxy binds on ports but no container is running (anymore) #25981

Closed
nagua opened this Issue Aug 24, 2016 · 44 comments

nagua commented Aug 24, 2016

Output of docker version:

Client:
 Version:      1.12.1
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   23cf638
 Built:        Thu Aug 18 05:02:53 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.1
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   23cf638
 Built:        Thu Aug 18 05:02:53 2016
 OS/Arch:      linux/amd64

Output of docker info:

Containers: 6
 Running: 0
 Paused: 0
 Stopped: 6
Images: 93
Server Version: 1.12.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 211
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge null host overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options:
Kernel Version: 3.16.0-4-amd64
Operating System: Debian GNU/Linux 8 (jessie)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.748 GiB
Name: motocom.2hg.org
ID: O4YH:4KEL:GEY3:C26M:QWMN:PCLK:RF6E:DQSG:XPWG:SSBE:FVE3:GWBL
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No kernel memory limit support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
Insecure Registries:
 127.0.0.0/8

Additional environment details (AWS, VirtualBox, physical, etc.):
This is a dedicated server running Debian Jessie. Everything up to date.

# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
# netstat -ap
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 *:3389                  *:*                     LISTEN      890/openvpn
tcp        0      0 localhost:mysql         *:*                     LISTEN      1276/mysqld
tcp        0      0 *:ssh                   *:*                     LISTEN      727/sshd
tcp        0    336 xxx:ssh     ip1f10d8b9.dynami:60637 ESTABLISHED 1641/sshd: xxx
tcp        0      1 xxx:ssh     116.31.116.52:53774     FIN_WAIT1   -
tcp6       0      0 [::]:smtp               [::]:*                  LISTEN      6346/docker-proxy
tcp6       0      0 [::]:5280               [::]:*                  LISTEN      6366/docker-proxy
tcp6       0      0 [::]:imaps              [::]:*                  LISTEN      6316/docker-proxy
tcp6       0      0 [::]:5443               [::]:*                  LISTEN      6356/docker-proxy
tcp6       0      0 [::]:xmpp-client        [::]:*                  LISTEN      6386/docker-proxy
tcp6       0      0 [::]:submission         [::]:*                  LISTEN      6326/docker-proxy
tcp6       0      0 [::]:imap2              [::]:*                  LISTEN      6336/docker-proxy
tcp6       0      0 [::]:xmpp-server        [::]:*                  LISTEN      6376/docker-proxy
[...]

Steps to reproduce the issue:

  1. I upgraded from version 1.12.0 to 1.12.1.
  2. I tried to restart my containers but they failed to allocate ports.
  3. The ports are allocated by docker-proxy but no container is running.

Describe the results you received:
I cannot start the containers because the ports are already in use.

Describe the results you expected:
docker-proxy should not allocate ports that Docker no longer uses.

Additional information you deem important (e.g. issue happens only occasionally):

I already tried restricting Docker to IPv4, but that did not change anything. I also tried restarting Docker and the server, but Docker still allocates the same ports. I use docker-compose to manage my containers, if that matters.
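
For reference, this is roughly how I list the stale docker-proxy bindings (a sketch; ss and netstat are just the standard tools, nothing Docker-specific):

# show every listening socket owned by a docker-proxy process
sudo ss -tlnp | grep docker-proxy
# the same with net-tools
sudo netstat -tlnp | grep docker-proxy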

Is more information needed? Is there a quick workaround to make Docker think the ports are no longer needed?

amitkumarj441 commented Aug 24, 2016

Hey @nagua,

It isn't really possible to simply close a port from outside the application that opened the socket listening on it. The only way is to completely kill the process that owns the port; then, after a minute or two, the port becomes available again.

Anyway, here's how to kill a process that owns a particular port:

sudo netstat -ap | grep :<port_number>

That will output the line corresponding to the process holding <port_number>. In the last column you'll see <pid>/<program name>. Then execute this:

kill <pid>
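
For example, if port 8080 were the one stuck (hypothetical port and PID, just to illustrate the two steps above):

sudo netstat -ap | grep :8080
# ... LISTEN  6346/docker-proxy    <- the PID is the number before the slash
kill 6346
# if the process ignores SIGTERM
kill -9 6346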

For more info: #6675 and #6682

nagua commented Aug 24, 2016

I now moved /var/lib/docker away, restarted Docker, and copied the volumes directory back into the newly created docker folder. Now everything is working again. If you need any files from there for debugging purposes, I still have the old docker folder lying around.
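
Roughly the sequence I used (a sketch; paths assume the default Docker root, and this throws away images and containers, so only the volumes survive):

systemctl stop docker
mv /var/lib/docker /var/lib/docker.old
systemctl start docker        # creates a fresh /var/lib/docker
systemctl stop docker
cp -a /var/lib/docker.old/volumes/. /var/lib/docker/volumes/
systemctl start docker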

@amitkumarj441 the problem is that docker-proxy was allocating the ports, and it was not possible to kill those processes because they stayed around as zombies, so the ports were never released. The only way to release the ports was to stop Docker, which would not allow me to use any containers anymore.

sanimej commented Aug 24, 2016

@nagua I am not able to recreate this. I had some containers running with ports published; when I restart the daemon, the docker-proxy processes also go down and the ports become available. I didn't try the upgrade from 1.12.0 to 1.12.1, but it shouldn't be very different from a daemon restart. Did you notice when the docker-proxy processes moved to the zombie state?

kossmoss commented Aug 26, 2016

I have the same problem running Docker (v1.12.1) via docker-compose inside a virtual machine managed by Vagrant.
I noticed this problem several times after restarting the virtual machine (probably a sign that the docker daemon processes were not terminated correctly).
Several ports are occupied, while docker ps shows no Docker containers actually running at the same time.
Simply killing the processes and restarting the docker service (or even the whole virtual machine) doesn't solve the problem. Every time the docker daemon restarts, it brings those zombie docker-proxy processes back, and the ports are still unavailable for new containers.

I'm going to try to solve the problem by saving a virtual machine snapshot and then clearing /var/lib/docker as @nagua did.

cpuguy83 (Contributor) commented Aug 26, 2016

@kossmoss Can you explain the "zombie docker-proxy processes" comment?
Do you see docker-proxy running while the daemon is not up?

nagua commented Aug 27, 2016

@cpuguy83 When I start the docker daemon, it starts up docker-proxy processes for ports that no existing container uses anymore. Trying to kill the docker-proxy processes doesn't help, because they do not exit and end up in a zombie state, so they never release the ports.

So I had no way to use the ports again while docker was running.

hopeseekr commented Aug 29, 2016

We have this same problem. docker-proxy is binding to ports 3306 and 5900, which our containers usually bind to, but now none of those containers will start. Help!

jwodder commented Sep 2, 2016

We appear to be experiencing this same problem.

docker version:

Client:
 Version:      1.12.0
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   8eab29e
 Built:        Thu Jul 28 22:11:10 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.0
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   8eab29e
 Built:        Thu Jul 28 22:11:10 2016
 OS/Arch:      linux/amd64

docker info:

Containers: 69
 Running: 55
 Paused: 0
 Stopped: 14
Images: 253
Server Version: 1.12.0
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 696
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: null host bridge overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: apparmor seccomp
Kernel Version: 4.4.0-36-generic
Operating System: Ubuntu 16.04 LTS
OSType: linux
Architecture: x86_64
CPUs: 16
Total Memory: 47.16 GiB
Name: container2
ID: CC32:U2QZ:HQTG:KK45:NYWG:LHF4:HVSE:UTEA:RYTR:3E4L:OS5S:MEVI
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
 127.0.0.0/8

After restarting the Docker daemon (in order to fix a presumably unrelated problem with an unstoppable container) on an instance with 70 containers, approximately 44 of which have exposed ports, four of the containers were unable to start (even after deleting & recreating them) because their ports were in use by docker-proxy instances. I tried killing one of the instances, but it just became a zombie, which Docker still hasn't reaped after about 10 minutes.

kossmoss commented Sep 2, 2016

@cpuguy83 sorry for the late reply. Probably "zombie" is not the proper term - I was short on time, didn't investigate much, and can't say what exactly those processes should be called. I look at currently open ports with sudo netstat -tunlp or sudo service docker status. In both cases, after the docker daemon has started, I see those docker-proxy processes occupying the ports, but at the same time docker ps shows no running containers.

When I kill a docker-proxy process, it is actually destroyed and the port stays free until the next docker daemon restart.

When the docker daemon is stopped, no docker-proxy processes are running; they only run while the daemon is running and seem to be started automatically at some point during daemon startup. It looks like Docker stores the state of those processes somewhere when stopped and brings them back when started, but does not link them to any running containers.
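
A quick way to confirm that relationship (a sketch using standard procps/psmisc tools; dockerd is the daemon binary name as of 1.12):

# the docker-proxy processes and their parent PIDs
ps -o pid,ppid,stat,cmd -C docker-proxy
# they should show up under the daemon's process tree
pstree -p $(pidof dockerd) | grep -i proxy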

Maybe this problem is caused by starting containers via docker-compose instead of a simple docker run? @hopeseekr are you using docker-compose?

rdavaillaud commented Sep 5, 2016

Well, we've got the same problem, with compose 1.8, docker 1.12.1 on fedora 24.

rdavaillaud commented Sep 5, 2016

Well, I've found a workaround (commands sketched below):

  • stop docker
  • remove all internal docker network state: rm -rf /var/lib/docker/network/files/
  • start docker
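
On a systemd host that comes down to (a sketch; the path is the default Docker root, and removing it also forgets user-defined networks):

systemctl stop docker
rm -rf /var/lib/docker/network/files/
systemctl start docker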

sanimej commented Sep 5, 2016

@rdavaillaud @kossmoss I couldn't recreate the issue. But I wasn't using compose, and all the reported occurrences seem to be happening with compose. Can you give the exact steps you are using that result in the issue?

aducos commented Sep 6, 2016

I work with rdavaillaud and we have 4 PCs using this Docker and Compose configuration; this happened on only one of them, one day after a reboot.

So we don't really know how to reproduce it.

rdavaillaud commented Sep 6, 2016

This is our docker-compose.yml, nothing fancy here I think, and no network definition.

version: '2'
services:
  frontal:
    build:
      context: ./Docker/frontalWeb
    container_name: frontal
    extra_hosts:
     - "app.dev.local:127.0.0.1"
     - "www.app.dev.local:127.0.0.1"
     - "ws.app.dev.local:127.0.0.1"
    image: local/frontalweb
    links:
      - bdd:HBS-BDD
      - tilecache:carto.app.dev.local
      - tilecache:carto1.app.dev.local
      - tilecache:carto2.app.dev.local
      - tilecache:carto3.app.dev.local
      - tilecache:carto4.app.dev.local
    ports:
      - "80:80"
      - "443:443"
    privileged: true
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
      - ./:/var/www/app/app-source
      - ./../app-datas:/var/local/app-datas
      - ./System/apache/:/etc/httpd/sites-enabled/
      - ./Docker/frontalWeb/docker-entrypoint.d:/docker-entrypoint.d
      - ./app.crt:/etc/ssl/app.com/app.com.cer
      - ./app.key:/etc/ssl/app.com/app.com.key
      - ./app.crt:/etc/ssl/app.com/app.crt

  tilecache:
    build:
      context: ./Docker/tilecache
    container_name: tilecache
    links:
      - mapserver:carto.app.dev.local
      - mapserver:carto1.app.dev.local
      - mapserver:carto2.app.dev.local
      - mapserver:carto3.app.dev.local
      - mapserver:carto4.app.dev.local
    ports:
      - "82:80"
    volumes:
      - ./Carto:/var/www/app/app/Carto
      - ./../mapfiles/app:/var/local/spatial/app
      - ./../app-datas:/var/local/app-datas
      - ./Docker/tilecache/docker-entrypoint.d:/docker-entrypoint.d

  mapserver:
    build:
      context: ./Docker/mapserver
    container_name: mapserver
    links:
      - bdd:HBS-BDD
    ports:
      - "81:80"
    volumes:
      - ./Carto:/var/www/app/app/Carto
      - ./../mapfiles/app:/var/local/spatial/app
      - ./../app-datas:/var/local/app-datas
      - ./Docker/mapserver/docker-entrypoint.d:/docker-entrypoint.d

  bdd:
    build:
      context: ./Docker/postgresql
    cap_add:
      - SYS_ADMIN
    container_name: bdd
    ports:
      - "5432:5432"
    privileged: true
    security_opt:
      - seccomp:unconfined
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
      - ./DataBase:/var/www/app/app/DataBase
      - ./../app-datas-bdd:/var/local/app-datas
      - ./Docker/postgresql/docker-entrypoint.d:/docker-entrypoint.d

The Docker containers are built from the official centos7-systemd image.
The hosts run Fedora 24, Docker Engine 1.12.1, and Compose 1.8.

We will tell you if it happens again.

kossmoss commented Sep 6, 2016

@sanimej As I mentioned before, I noticed the problem after restarting the virtual machine. So the only obvious way I know to try to reproduce it is to restart the machine (or even just cut the power) while containers are running (keep docker-compose in mind - maybe that's also important), probably several times. It's a crude approach, so I can't recommend it, but I don't know any other way yet.

glenpike commented Sep 6, 2016

Removing /var/lib/docker/network/files/ worked for me.

pcornelissen commented Sep 8, 2016

I'm having the same problem on Docker for Mac.

docker info
Containers: 27
 Running: 4
 Paused: 0
 Stopped: 23
Images: 250
Server Version: 1.12.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 310
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge null host overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 4.4.19-moby
Operating System: Alpine Linux v3.4
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 2.934 GiB
Name: moby
ID: 2STL:DVY7:P6IW:M2IO:I35K:MP3R:LCE7:NPJQ:IQYN:5OSY:GHLJ:6VUR
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 66
 Goroutines: 82
 System Time: 2016-09-08T16:36:04.267966976Z
 EventsListeners: 1
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
 127.0.0.0/8

But I have no /var/lib/docker to clean up :-(
(I used the "reset to factory settings" option of Docker for Mac and thus lost all images, but the error is gone.)

cpuguy83 (Contributor) commented Sep 8, 2016

@sanimej Is this due to some change where we are now storing port state information in BoltDB?

dursk commented Sep 27, 2016

Just ran into this same issue w/ Docker for Mac, v1.12.1 and docker-compose v1.8

aboch (Contributor) commented Oct 18, 2016

This issue is a side-effect of the changes added to support the --live-restore feature.

From Docker 1.12.0 onward, when the bridge network driver comes up, it restores the bridge network endpoints it finds in the store.
While doing this, it also restores the port bindings associated with the endpoint, if any.

Note:

  • Under normal conditions, at daemon boot no endpoints are present in the store.
  • If stale endpoints are present (usually the case after an ungraceful shutdown of the daemon with running containers), they are expected to be removed during boot as part of the stale sandbox cleanup run by the libnetwork core.
  • If endpoints are present because of the live-restore, they will not be removed because the sandbox cleanup will not happen for the containers which are running.

The issue here seems to be that the sandbox for stale endpoints created by an older Docker version is not present, therefore the libnetwork core does not invoke the driver's cleanup for those stale endpoints.

I believe the stale endpoint issue can be fixed by removing the networks and restarting the daemon, because during the bridge endpoint restore the endpoint is discarded and removed from the store if the corresponding network is missing.
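
For example (a sketch; the network name is whatever docker network ls reports, e.g. a compose-created <project>_default network):

docker network ls
docker network rm <network-name>
systemctl restart docker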

If the above does not work, one solution is to manually remove the problematic docker/network/v1.0/bridge-endpoint/<id> key/value pair from the store. I just found this CLI tool to browse and modify a BoltDB store, but have not had much luck with it so far: https://github.com/br0xen/boltbrowser.

Otherwise, the last resort is to remove the /var/lib/docker/network/files/local-kv.db file before starting the daemon.
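
A sketch of that last resort (systemd host assumed; this wipes the locally stored network state, so user-defined networks will have to be recreated):

systemctl stop docker
# optional: check which stale bridge endpoints are recorded before deleting
strings /var/lib/docker/network/files/local-kv.db | grep bridge-endpoint
rm /var/lib/docker/network/files/local-kv.db
systemctl start docker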

On a side note, there is also a bug which can cause this issue. It is explained and fixed in docker/libnetwork#1504, and the fix will be available in the next release.

pcornelissen commented Oct 19, 2016

Hi!

I am not 100% sure because it happened a while ago, but I think I removed everything including the networks and the problem remained. But maybe I didn't restart afterwards or something like that. Nevertheless thanks for the analysis!

justincormack (Contributor) commented Oct 19, 2016

cc @djs55

Luzifer commented Oct 20, 2016

Confirmed temp-fix:

# systemctl stop docker; rm /var/lib/docker/network/files/local-kv.db; systemctl start docker
# docker ps -aq | xargs docker rm -f

Afterwards a clean env is available and containers are able to bind to their respective ports.

aarnaud commented Oct 20, 2016

docker ps -aq | xargs docker rm -f

is it really necessary?

Like @rdavaillaud, this workaround works for me:

systemctl stop docker
rm -rf /var/lib/docker/network/files
systemctl start docker

jeanpralo commented Nov 9, 2016

The workaround of removing /var/lib/docker/network/files/local-kv.db works.

The problem it creates, though, is that if you happen to have containers sharing the same network, you will have to make sure you restart all of them :(
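
For what it's worth, the blunt version of that restart (a sketch; docker ps -q lists only the running containers):

docker restart $(docker ps -q)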

This is quite a big problem when a server crashes and your containers do not come back up because of it. Any chance of a fix based on what @aboch found? @thaJeztah

ghost commented Nov 10, 2016

A coworker had this exact problem; changing the storage driver from aufs to devicemapper helped. Removing stuff from /var/lib/docker didn't help.

marcelmfs commented Dec 21, 2016

I just saw this behaviour in 1.12.1, docker-compose 1.8.1.

We have some short-lived containers that bind to some ports for registration purposes. After some runs, docker-proxy processes were still running, referencing old runs of already stopped and removed containers; at some point Docker reuses one of the IPs, but the old iptables rules are still in place, which causes all sorts of routing problems (a request to one service reaches another).

This is really bad. It will happen in production (given that we expect Docker to be a long-lived process).

cpuguy83 (Contributor) commented Dec 21, 2016

@marcelmfs please update to 1.12.5.
The above mentioned bug should be fixed there (as of 1.12.3)

marcelmfs commented Dec 21, 2016

@cpuguy thanks, I'll do it asap.

rohitsakala commented May 12, 2017

Hi, I faced the same problem. My Docker version is 1.12.0. Is this fixed? If it is, I will update my Docker.

Thanks.

amitkumarj441 commented May 12, 2017

@rohitsakala Your Docker v1.12.0 is old; this problem is fixed in the newer Docker v1.13.x. It also seems like you have docker-compose v1.8. If you want to continue with your current Docker version, removing /var/lib/docker/network/files/ will work for it.

rohitsakala commented May 12, 2017

@amitkumarj441 Thanks. I will update Docker, as I need a permanent solution.

@thaJeztah thaJeztah added this to the 1.12.3 milestone May 15, 2017

thaJeztah (Member) commented May 15, 2017

Yes, let me close this issue because the issue reported here was resolved in 1.12.3

@thaJeztah thaJeztah closed this May 15, 2017

Davor111 commented Jul 14, 2017

I just had this exact issue happen to me on Docker version 1.13.1, build 092cba3, docker-compose version 1.10.0, build 4bd6f1a, running Ubuntu 14.04 with just 5 containers serving a low-traffic WordPress instance. For routing I was using an nginx image straight from Docker Hub with a couple of conf files for reverse proxying. I was able to restore the setup by using the method mentioned above (stopping Docker altogether, removing the network files, and starting it all up again).

Honestly, I'm a little surprised, as this setup had been running flawlessly for the past 6 months. The only recent change was the addition of certificates to the nginx reverse proxy. Please let me know if I should share docker info, etc.

yyb196 pushed a commit to yyb196/moby that referenced this issue Jul 31, 2017

沈陵
Fixes moby#25981
fixes bugs that mistook gw6 for gw which
causes docker-proxy hangover when container
with port mapping stopped after dockerd
restarted

Signed-off-by: 沈陵 <shenling.yyb@taobao.com>

Lykathia commented Mar 5, 2018

Just had this happen with Docker version 18.02.0-ce, build fc4de447b5

Stopping Docker, removing the local-kv.db file, and starting it again resolved the issue.

jstoja (Contributor) commented May 17, 2018

@Lykathia No need to delete the local-kv.db file.

  1. kill the docker-proxy process owning the port
  2. restart the docker daemon
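
A minimal sketch of those two steps (port 8080 and the PID are placeholders):

sudo ss -tlnp | grep :8080      # find the docker-proxy PID holding the port
sudo kill <pid>
sudo systemctl restart docker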

Lykathia commented May 23, 2018

@jstoja just had the same error today (this time on version 18.05.0-ce, build f150324782). Killing the process and restarting did not address the issue.

Removing the local-kv.db file again did solve it.

The issue this time happened after a power outage in the server room. I'm guessing something didn't get cleaned up correctly.
/shrug

Ghoughpteighbteau commented Jun 13, 2018

I just had this issue occur as well while testing out some swarm networking issues.

Client:
 Version:      18.05.0-ce
 API version:  1.37
 Go version:   go1.10.2
 Git commit:   f150324782
 Built:        Wed May 16 22:27:45 2018
 OS/Arch:      linux/amd64
 Experimental: false
 Orchestrator: swarm

Server:
 Engine:
  Version:      18.05.0-ce
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.10.2
  Git commit:   f150324782
  Built:        Wed May 16 22:28:17 2018
  OS/Arch:      linux/amd64
  Experimental: false

Killing the process did not work. systemctl stop docker stopped the docker-proxy process and the port was no longer bound, but when I started Docker back up it rebound to the port. I had to stop Docker and remove the local-kv.db file to prevent Docker from pointlessly binding to that port.

ihulsbus commented Sep 8, 2018

Issue occurs here too.

Output of docker version:

Client:
 Version:           18.06.1-ce
 API version:       1.30 (downgraded from 1.38)
 Go version:        go1.10.3
 Git commit:        e68fc7a
 Built:             Tue Aug 21 17:24:51 2018
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          17.06.2-ce
  API version:      1.30 (minimum version 1.12)
  Go version:       go1.8.3
  Git commit:       a04f55b
  Built:            Thu Sep 21 20:36:57 2017
  OS/Arch:          linux/amd64
  Experimental:     false

Output of docker info:

Containers: 4
 Running: 3
 Paused: 0
 Stopped: 1
Images: 4
Server Version: 17.06.2-ce
Storage Driver: aufs
 Root Dir: /var/snap/docker/common/var-lib-docker/aufs
 Backing Filesystem: extfs
 Dirs: 28
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 6e23458c129b551d5c9871e5174f6b1b7f6d1170
runc version: 810190ceaa507aa2727d7ae6f4790c76ec150bd2
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.15.0-33-generic
Operating System: Ubuntu Core 16
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 7.466GiB
Name: server
ID: XM3Y:EDCJ:JXRQ:QET5:LEBD:JURD:NKZA:XABF:YHQR:DPC6:O7FD:6X5J
Docker Root Dir: /var/snap/docker/common/var-lib-docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 34
 Goroutines: 38
 System Time: 2018-09-08T18:43:13.438017572Z
 EventsListeners: 0
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support

Ezwen commented Sep 11, 2018

I encountered something similar this morning: a container had exited and could not restart because its port was still in use by the docker-proxy process. I had to restart Docker entirely to fix the issue.

# docker info
Containers: 27
 Running: 25
 Paused: 0
 Stopped: 2
Images: 29
Server Version: 18.06.1-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: journald
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.9.0-8-amd64
Operating System: Debian GNU/Linux 9 (stretch)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.859GiB
Name: hallownest
ID: IFMK:N5V6:TADM:HFRT:4PEC:HMR5:2VJP:OOML:IA5W:USWR:EILG:SDS4
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support
Ezwen commented Nov 2, 2018

(Note: I still encounter the problem regularly, and each time I still have to restart the whole docker daemon to solve it.)

nemke commented Dec 12, 2018

This occurred on my local machine (which I use for development) after a system freeze unrelated to Docker and a manual restart.

After I deleted local-kv.db, docker-proxy no longer listens on the ports of non-running containers.

mellena1 commented Dec 19, 2018

I have this same problem on an Ubuntu 16.04 EC2 instance using docker-compose and restart: always. The container fails on startup because it is unable to get IAM permissions (I'm assuming the container starts before the IAM setup is finished and working), and then tries to restart but fails because a docker-proxy process still holds its port.

The only fix is running service docker restart, but this is not ideal, as in an AWS autoscaling group the service never becomes healthy.

dragon9783 commented Dec 28, 2018

Same issue on Docker 18.03.1-ce; removing all containers and restarting the docker daemon solved it.
