Container crash leaves iptables DOCKER / POSTROUTING chain in unclean state #15151

Closed
falschparker82 opened this Issue Jul 30, 2015 · 16 comments

@falschparker82

Hi there,

We observed an issue yesterday on Docker 1.6.2 running on Ubuntu 14.04 (not sure if it's fixed by docker/libnetwork#301 / moby/moby#13957, but probably not).

A container mysteriously crashed, so I restarted it via docker-compose up -d. The container was configured to expose internal port 8080 as external port 1336. Now came the curious part: from the host, the container was reachable at localhost:1336, but NOT at $HOST_PRIVATE_IP:1336.

As we saw it, this was a situation that "shouldn't ever happen", and it took us quite a few hours to track down. Finally, we found the culprit in the iptables rules:

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
[...]
MASQUERADE  tcp  --  ip-172-17-0-154.eu-west-1.compute.internal  ip-172-17-0-154.eu-west-1.compute.internal  tcp dpt:http-alt
[...]
MASQUERADE  tcp  --  ip-172-17-0-212.eu-west-1.compute.internal  ip-172-17-0-212.eu-west-1.compute.internal  tcp dpt:http-alt

Chain DOCKER (2 references)
target     prot opt source               destination         
DNAT       tcp  --  anywhere             anywhere             tcp dpt:1336 to:172.17.0.154:8080
[...]
DNAT       tcp  --  anywhere             anywhere             tcp dpt:1336 to:172.17.0.212:8080

Docker apparently never cleaned up the old DNAT entry, so it was conflicting with the newer one and shadowing it.
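
iptables matches top-down, so with two DNAT rules for the same destination port, the older stale rule matches first and shadows the new one. A quick way to make the shadowing visible (a sketch; the port is the one from this report) is to list the chain with rule numbers:

# List the nat-table DOCKER chain with rule numbers; the stale dpt:1336
# rule sits above the fresh one and therefore matches first.
sudo iptables -t nat -L DOCKER -n --line-numbers | grep 'dpt:1336'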

The Docker daemon was never restarted throughout the whole incident and investigation (mainly because we wanted to find the culprit). We could not figure out what exactly crashed the container, though; only the container went down, not the daemon.

If this is somehow still an issue, it seems rather easy to fix by always checking for duplicate external port configurations (which should never happen) when inserting the iptables rules. Unfortunately I'm not too deep into the codebase, so I can't fix this myself and don't know if it's still an issue.

Best,

Dominik

@mrjana

Contributor

mrjana commented Jul 30, 2015

@falschparker82 This most likely should not be a problem in 1.7.1, because we do remove all the port-mapping iptables rules when the container stops. From what you are saying, it looks like your container crashed while the daemon was still running. In that situation the daemon should always be able to detect the container going down (gracefully or ungracefully) and trigger the cleanup. Can you please try 1.7.1 and let us know?
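
Whether the daemon actually noticed the crash can be checked from the outside by watching the event stream; the cleanup described above should be triggered by the container's die event. A small sketch (the container name mycontainer is a placeholder):

# Terminal 1: watch the daemon's event stream; even an ungraceful exit
# should show up as a "die" event for the container.
docker events

# Terminal 2: simulate a crash by SIGKILLing the container's main process.
sudo kill -9 "$(docker inspect -f '{{ .State.Pid }}' mycontainer)"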

@thaJeztah

Member

thaJeztah commented Aug 9, 2015

@falschparker82 have you been able to test if this is resolved with 1.7.1? Thanks!

@w4-sjcho

w4-sjcho commented Aug 11, 2015

I'm using AWS Elastic Beanstalk, which runs Docker 1.6.2, and it seems like I'm also affected by this. It took me hours to figure out the problem.

As it's hard for us to update Beanstalk's Docker version, is there any workaround for this? What I can do is run some commands before starting up the Docker containers.

Also is it safe to just manually remove all iptables forwarding entries that point to old docker containers?
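
For reference, one possible workaround sketch along those lines, assuming the stale rules are the DNAT entries in the nat table's DOCKER chain as shown above. The script below is illustrative, not an official tool; it deletes DNAT rules whose target IP no longer belongs to a running container, so back up your ruleset (iptables-save) and test carefully before running it anywhere important:

#!/bin/bash
# Collect the bridge IPs of all currently running containers.
running_ips=$(docker ps -q | xargs -r docker inspect \
    -f '{{ .NetworkSettings.IPAddress }}')

# iptables -S prints rules in "-A DOCKER ... --to-destination IP:PORT" form.
iptables -t nat -S DOCKER | grep -- '--to-destination' | while read -r rule; do
    target_ip=$(echo "$rule" | sed -n 's/.*--to-destination \([0-9.]*\):.*/\1/p')
    if ! echo "$running_ips" | grep -qFw "$target_ip"; then
        echo "Removing stale rule pointing at $target_ip"
        # Re-issue the same rule with -D instead of -A to delete it.
        iptables -t nat ${rule/-A/-D}
    fi
done

The MASQUERADE leftovers in the POSTROUTING chain would need the same treatment, but it's the DNAT entries that shadow new port mappings.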


@peterkuiper

peterkuiper commented Aug 14, 2015

I'm also seeing this issue (or something similar) with 1.8.1:

docker version
Client:
 Version:      1.8.1
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   d12ea79
 Built:        Thu Aug 13 19:47:52 UTC 2015
 OS/Arch:      darwin/amd64

Server:
 Version:      1.8.1
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   d12ea79
 Built:        Thu Aug 13 02:49:29 UTC 2015
 OS/Arch:      linux/amd64

I used docker-compose up. After stopping the containers, the iptables entries are still there.

After running docker-compose up:

root@dev:/etc# iptables -L -n -v
Chain INPUT (policy ACCEPT 149 packets, 14268 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 DROP       all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0

Chain OUTPUT (policy ACCEPT 130 packets, 34005 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain DOCKER (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.17          tcp dpt:5000
    0     0 ACCEPT     tcp  --  docker0 docker0  172.17.0.17          172.17.0.16          tcp dpt:6379
    0     0 ACCEPT     tcp  --  docker0 docker0  172.17.0.16          172.17.0.17          tcp spt:6379
    0     0 ACCEPT     tcp  --  docker0 docker0  172.17.0.17          172.17.0.16          tcp dpt:6379
    0     0 ACCEPT     tcp  --  docker0 docker0  172.17.0.16          172.17.0.17          tcp spt:6379
    0     0 ACCEPT     tcp  --  docker0 docker0  172.17.0.17          172.17.0.16          tcp dpt:6379
    0     0 ACCEPT     tcp  --  docker0 docker0  172.17.0.16          172.17.0.17          tcp spt:6379

CTRL-C:

$  docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
$ docker ps -a
CONTAINER ID        IMAGE                COMMAND                  CREATED              STATUS                            PORTS               NAMES
59dc811cc019        dockertest_web       "/bin/sh -c 'python a"   About a minute ago   Exited (137) 6 seconds ago                            dockertest_web_1
9f772dea80aa        redis                "/entrypoint.sh redis"   About a minute ago   Exited (0) 16 seconds ago                             dockertest_redis_1

Rules are still there:

$ iptables -L -n -v
Chain INPUT (policy ACCEPT 64 packets, 6578 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 DROP       all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0

Chain OUTPUT (policy ACCEPT 42 packets, 9837 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain DOCKER (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 ACCEPT     tcp  --  docker0 docker0  172.17.0.17          172.17.0.16          tcp dpt:6379
    0     0 ACCEPT     tcp  --  docker0 docker0  172.17.0.16          172.17.0.17          tcp spt:6379
    0     0 ACCEPT     tcp  --  docker0 docker0  172.17.0.17          172.17.0.16          tcp dpt:6379
    0     0 ACCEPT     tcp  --  docker0 docker0  172.17.0.16          172.17.0.17          tcp spt:6379
    0     0 ACCEPT     tcp  --  docker0 docker0  172.17.0.17          172.17.0.16          tcp dpt:6379
    0     0 ACCEPT     tcp  --  docker0 docker0  172.17.0.16          172.17.0.17          tcp spt:6379

Another docker-compose up:

$ iptables -L -n -v
Chain INPUT (policy ACCEPT 317 packets, 32548 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 DROP       all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0

Chain OUTPUT (policy ACCEPT 283 packets, 58144 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain DOCKER (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 ACCEPT     tcp  --  docker0 docker0  172.17.0.17          172.17.0.16          tcp dpt:6379
    0     0 ACCEPT     tcp  --  docker0 docker0  172.17.0.16          172.17.0.17          tcp spt:6379
    0     0 ACCEPT     tcp  --  docker0 docker0  172.17.0.17          172.17.0.16          tcp dpt:6379
    0     0 ACCEPT     tcp  --  docker0 docker0  172.17.0.16          172.17.0.17          tcp spt:6379
    0     0 ACCEPT     tcp  --  docker0 docker0  172.17.0.17          172.17.0.16          tcp dpt:6379
    0     0 ACCEPT     tcp  --  docker0 docker0  172.17.0.16          172.17.0.17          tcp spt:6379
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.18          tcp dpt:3000
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.21          tcp dpt:80
    0     0 ACCEPT     tcp  --  docker0 docker0  172.17.0.21          172.17.0.18          tcp dpt:3000
    0     0 ACCEPT     tcp  --  docker0 docker0  172.17.0.18          172.17.0.21          tcp spt:3000
    0     0 ACCEPT     tcp  --  docker0 docker0  172.17.0.21          172.17.0.18          tcp dpt:3000
    0     0 ACCEPT     tcp  --  docker0 docker0  172.17.0.18          172.17.0.21          tcp spt:3000
    0     0 ACCEPT     tcp  --  docker0 docker0  172.17.0.21          172.17.0.18          tcp dpt:3000
    0     0 ACCEPT     tcp  --  docker0 docker0  172.17.0.18          172.17.0.21          tcp spt:3000

I'm also not sure why icc=true is the default on the Docker daemon. It seems no isolation is being provided, even though the docker-compose docs state: "Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment". It seems that this is not true, or I am overlooking something.

I am running this on my Mac using VirtualBox. I created the Docker VM with docker-machine.
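
On the icc question: inter-container communication on the default bridge can be switched off by starting the daemon with --icc=false, which makes Docker insert a DROP rule between containers on docker0 so that only explicitly linked containers can talk to each other. A sketch for an Ubuntu-style install (the file path is platform-specific; a docker-machine VM keeps its daemon options in /var/lib/boot2docker/profile instead):

# /etc/default/docker  (Ubuntu layout; the path varies by platform)
DOCKER_OPTS="--icc=false --iptables=true"

# then restart the daemon:
sudo service docker restart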


@retornam

retornam commented Sep 16, 2015

@falschparker82 @peterkuiper I'm also running 1.6.2 and ran into this same issue twice this week. Have you been able to figure out any workarounds for this? Thanks.


@peterkuiper

peterkuiper commented Sep 17, 2015

@retornam No, I haven't. I just removed the VM and started over. I'm not sure if this was a one-time thing or if I'm still having this problem; if I am, I'm just ignoring it :) Have you tried running 1.8.x?


@mindscratch

mindscratch commented Sep 30, 2015

I'm having similar issues with 1.7.1 on CentOS 7. I'm unable to update to Docker 1.8.2 because we're running registry 2.1.1 (backed by NFS), and this issue is a blocker.

The only "fix" for us is to restart docker which is a huge bummer.


@carmstrong

Contributor

carmstrong commented Nov 20, 2015

We have a user seeing this on Docker 1.7.1 on CoreOS. Confirmed that restarting Docker resolves the issue.

@carmstrong

Contributor

carmstrong commented Nov 20, 2015

What repro steps does the Docker team need to investigate this? It seems pretty easy to have a running container crash and have the daemon not clean up its rules.
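
A minimal repro sketch along those lines (the nginx image and the crashtest name are just placeholders; the point is to kill the container's process from outside so the daemon sees an ungraceful exit):

# Start a container with a published port.
docker run -d -p 1336:80 --name crashtest nginx

# Confirm the DNAT rule for the published port exists.
sudo iptables -t nat -L DOCKER -n | grep 1336

# Simulate a crash: SIGKILL the container's init process from the host,
# bypassing "docker stop".
sudo kill -9 "$(docker inspect -f '{{ .State.Pid }}' crashtest)"

# On an affected version the rule survives; after a fix this returns nothing.
sudo iptables -t nat -L DOCKER -n | grep 1336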

@thaJeztah

Member

thaJeztah commented Nov 23, 2015

@carmstrong I'm not entirely sure a reproducible case for 1.7.x is helpful unless it's still reproducible on 1.9.x, given that we don't backport to older releases. However, if you have a minimal test case, perhaps someone is able to see if it's still reproducible on 1.9.x.

@sempasha

sempasha commented Jan 14, 2016

Docker version 1.9.1, build a34a1d5
We have just run into this issue. After some investigation we found exactly the same thing: port-forwarding rules were still in iptables even though the container had crashed hours ago.


@aboch

Contributor

aboch commented Jan 14, 2016

Yes, I was able to reproduce this with latest master.
I closed the shell where the container with exposed ports was running, and I see libnetwork is not being informed of the container going down, so no cleanup is happening.

The only logs showing are:

DEBU[0577] Closing buffered stdin pipe                  
DEBU[0577] attach: stdin: end                           
DEBU[0577] attach: stderr: end                          
DEBU[0577] attach: stdout: end

I don't know if the daemon can detect all container crashes.

Edit: the above does not kill the container...

@GordonTheTurtle

GordonTheTurtle commented Jan 21, 2016

USER POLL

The best way to get notified of updates is to use the Subscribe button on this page.

Please don't use "+1" or "I have this too" comments on issues. We automatically
collect those comments to keep the thread short.

The people listed below have upvoted this issue by leaving a +1 comment:

@LRancez


@f-wong

f-wong commented May 11, 2016

Seeing the same issue on Docker 1.7.1 on CoreOS. I had to manually do a docker run and bind to different IP/port pairs in order to figure out what the issue was. In the end, I backed up iptables, manually deleted the offending entries, and it worked.
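
A sketch of that backup-then-delete approach (rule number 3 is only an example; pick the stale entry from the numbered listing):

# Back up the full ruleset first so it can be restored if needed.
sudo iptables-save > /tmp/iptables.backup

# Find the stale entry by rule number in the nat DOCKER chain.
sudo iptables -t nat -L DOCKER -n --line-numbers

# Delete the offending rule by its number.
sudo iptables -t nat -D DOCKER 3

# Roll back if anything goes wrong.
sudo iptables-restore < /tmp/iptables.backup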

@aboch

Contributor

aboch commented Sep 14, 2016

This no longer happens in recent Docker versions.
If the container crashes, containerd will notify the daemon, which in turn will ask libnetwork to release the resources held by the container.

I tested killing the container PID in 1.11.2 and 1.12.1, and the corresponding rules in the nat table are correctly removed.
I think this issue can now be closed.

@LK4D4

Contributor

LK4D4 commented Sep 14, 2016

@aboch Nice, thank you!

@LK4D4 LK4D4 closed this Sep 14, 2016
