Failed to receive UDP traffic after container restart #8795

Closed
mpeterss opened this Issue Oct 27, 2014 · 88 comments

@mpeterss

mpeterss commented Oct 27, 2014

I start a container and share a port for UDP traffic as this:

docker run --rm -p 5060:5060/udp --name host1 -i -t ubuntu:14.04

Then in that container I wait for traffic with:

nc -u -l 5060

I then generate traffic from another machine:

nc -u <docker_host_ip> 5060

Then everything works fine and I can see that I receive the UDP traffic in the container.

But when I exit the container and do the same thing again, I can no longer receive UDP traffic in the docker container.
If I wait for about 5 minutes before I start to send, it will work though. I have also noticed that if the sender changes the port it is binding to locally, it will also work. So there seems to be some mapping that is not deleted when the docker container is removed.
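A quick way to confirm on the docker host whether a stale mapping is involved (a sketch, assuming the conntrack tool is installed; 5060 is the published port from the commands above):

sudo conntrack -L -p udp --orig-port-dst 5060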

@liyichao

liyichao commented Nov 29, 2014

This issue is due to conntrack. The Linux kernel keeps state for each connection. Even though UDP is connectionless, if you run

sudo cat /proc/net/ip_conntrack

you will see a lot of entries. The output shows that the entries still point to the container address from before the restart, and that stale state prevents packets from arriving at the new container. The reason is this:

For a connection, the first packet goes through the iptables NAT table, and that is where Docker routes the packet into its own chain and then to the right container.

When you restart the container, the container's IP changes, and so does the DNAT rule, which now routes to the new address. But the old connection's state in conntrack is not cleared, so when a packet arrives it does not go through the NAT table again, because it is not "the first" packet of its flow. The solution is to clear the conntrack entries, which can be done as follows:

sudo conntrack -D -p udp

(you will need sudo apt-get install conntrack)

Looking forward to Docker's solution.
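If deleting every UDP entry is too broad, the same cleanup can be restricted to the published port (a sketch; 5060 is the port from the original report, and --orig-port-dst filters on the original destination port):

sudo conntrack -D -p udp --orig-port-dst 5060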

@SvenDowideit SvenDowideit added the bug label Dec 1, 2014

@ljakob

ljakob commented Dec 19, 2014

Same problem on my side (openvpn within a container). I could resolve it temporarily with

iptables --table raw --append PREROUTING --protocol udp --source-port 4000 --destination-port 4000 --jump NOTRACK

run on the docker host. It's ugly but gets the job done.

IMHO the correct solution would be to clean up the conntrack table after adjusting iptables.
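For anyone trying the NOTRACK workaround above, the matching command to undo it later is the same rule with --delete instead of --append (a sketch, reusing the same parameters):

iptables --table raw --delete PREROUTING --protocol udp --source-port 4000 --destination-port 4000 --jump NOTRACK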

@blalor

blalor commented Jan 5, 2015

Definitely looking forward to a fix for this one.

@LK4D4

Contributor

LK4D4 commented Jan 5, 2015

Seems to be working for me with the 3.18.0 kernel.

@erikh

Contributor

erikh commented Jan 5, 2015

The UDP proxy has always had issues with packet loss; we've never found a good answer for it.

-Erik


@blalor

blalor commented Jan 6, 2015

I'm using CentOS 6.6, kernel 2.6.32-504.1.3.el6.x86_64. Seems like Docker should be responsible for (or at least facilitate through configuration) expiring conntrack table entries.

@technolo-g

technolo-g commented Feb 3, 2015

I too would like to see some real solution to this.

@nmarasoiu

nmarasoiu commented Mar 5, 2015

Hi, we would also like to know when this issue makes progress. What are the impediments to fixing this bug? Can we help in any way with details? We run Consul, and at some point (I guess after some restarts) the nodes start "suspecting each other" (per the gossip protocol); the nodes can receive the UDP messages saying they are being suspected, and they try to reply with "hey, I am alive", but the reply never reaches its destination.

Is this a priority? Is it hard to reproduce or debug? Can we help with more concrete data?
I reproduced it with kernel 3.13.

@grimmy

grimmy commented May 7, 2015

Flushing the conntrack table worked for me, but I'm running on a dev machine and not prod; I'll have to give @liyichao's answer a go if/when we hit this in prod.

@grimmy

grimmy commented May 12, 2015

Is there any reason why the conntrack entries can't just be removed when docker determines a container stopped?

@ljakob

ljakob commented May 13, 2015

@grimmy No; the fix should not be too difficult to implement. After removing the iptables entries, just call conntrack --delete with similar arguments (IP + port).
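A sketch of what that delete could look like from the shell, with placeholder values (5060 for the published port and 203.0.113.10 for the host IP the sender targets; neither value comes from docker itself):

conntrack -D -p udp --orig-dst 203.0.113.10 --orig-port-dst 5060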

@grimmy

grimmy commented May 13, 2015

OK, that's what I figured. I'll see if I can find some time to put a pull request together unless someone else wants to jump on it.

@nmarasoiu

nmarasoiu commented May 22, 2015

Hi,

I applied a patch in the cleanup callback of mapper.go, adding a conntrack delete with the container IP as the source IP in three places in mapper.go, including the Unmap and cleanup functions. It did not succeed: the serf gossip protocol, which I run over UDP, complains that packets do not make it across, and nodes blacklist other nodes in their memberlist. Either there must be other places to do this, or this should also be done on the remote nodes.

Normally this should be done via accessible "objects", but I have not found a suitable one, either in docker or as a golang import, so I started by calling a command in the OS (which of course is not a portable solution, but one to check assumptions).

cleanup := func() error {
    // need to undo the iptables rules before we return
    if m.userlandProxy != nil {
        m.userlandProxy.Stop()
    }
    pm.forward(iptables.Delete, m.proto, hostIP, allocatedHostPort, containerIP.String(), containerPort)
    if err := pm.Allocator.ReleasePort(hostIP, m.proto, allocatedHostPort); err != nil {
        return err
    }
    // added: shell out to the conntrack binary (needs "os/exec") to drop entries whose
    // original source is the container IP, then flush the whole table as a fallback
    exec.Command("/usr/sbin/conntrack", "-D", "-s", containerIP.String()).Run()
    return exec.Command("/usr/sbin/conntrack", "-F").Run()
}
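One thing worth checking when debugging a patch like this (a sketch with a placeholder container IP): for inbound DNAT'd traffic the container address is recorded as the reply source of the conntrack entry, not the original source, so a -s filter may not match the stale inbound flows at all. Listing both views on the host shows which filter actually hits them:

conntrack -L -p udp -s 172.17.0.2           # flows whose original source is the container
conntrack -L -p udp --reply-src 172.17.0.2  # inbound DNAT'd flows answered by the container
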
@nmarasoiu

nmarasoiu commented May 27, 2015

Hi, any feedback on my attempt to start a way to fix this?

@grimmy

grimmy commented May 27, 2015

I've been cheating locally when it happens by using "conntrack -F"; next time it happens I'll try with just the specific IP address.

@nmarasoiu

nmarasoiu commented May 27, 2015

Hi,

But I called -F too, probably in the wrong place.

Surely only the local tables need to be flushed, not the remote ones, right?


@grimmy

grimmy commented May 27, 2015

I haven't had to do anything on the remote end. But I do have multiple containers talking to each other and to external devices over UDP. The first time this happened (and I discovered that it was conntrack), there was a conntrack entry for an external device pointing to an old container. Doing "conntrack -F" cleared that, and then the next packet from that external device made it to the correct container.

@berglh

berglh commented Jun 9, 2015

So we're running StatsD in a Docker container on RHEL 7 and ran into this problem when the Docker service is restarted, which in turn restarts the container. The UDP packets to StatsD were arriving on the interface but not making it through to the container, and iptables wasn't blocking them, which led us to this thread.

The solution for us was to use conntrack to delete only the states for the things that are not working, so that we have the least impact on existing states. In the systemd unit file that launches the Docker container for StatsD, running an ExecStartPre with conntrack to delete the specific states that are UDP on port 8125 has solved this problem for us. Running conntrack -F really seems a bit brute force for our requirements:

# grep -B1 run /etc/systemd/system/statsd.service 
ExecStartPre=/sbin/conntrack -D -p udp --orig-port-dst 8125
ExecStart=/usr/bin/docker run -p 8125:8125/udp -p 8126:8126 \
@grimmy

grimmy commented Jun 9, 2015

Yes, the -F has only been performed on dev workstations, and of course not in prod. This really just needs to be fixed in docker, but @nmarasoiu hasn't had any success and I haven't had time to fix it either.

@thaJeztah

Member

thaJeztah commented Jun 9, 2015

ping @mavenugo @mrjana is this Libnetwork territory now? Wondering if progress was made in this area

@tmichaud314

tmichaud314 commented Aug 28, 2015

It's been several months w/o any posts. Is this issue being tracked elsewhere?

@andyka

andyka commented Sep 9, 2015

I have a similar problem (UDP traffic from within a container to the outside world sometimes doesn't get routed back to the container), however the workaround (conntrack -F) doesn't work. Note the word "sometimes" - it looks like it depends on the destination IP, however I can't say for sure. The IPs I am using are in the 10.0.0.0/8 range, so there is no collision there.

The workaround for me is using --net=host when running the container, but I would love a real solution and was hoping this issue would solve my problem too. Is anyone working on this?

@nmarasoiu

nmarasoiu commented Sep 9, 2015

Hi,

I recall this happens on container restart, right?

Is it possible to delete the container and relaunch a fresh one? Hopefully the image is still cached locally. Or do you have data in the container? Using mounts may let you keep the container stateless; containers are best used for short-term tasks and erased afterwards. Just an idea.


fcrisciani added a commit to fcrisciani/docker that referenced this issue Apr 11, 2017

Adding test for moby/moby#8795
When a container was destroyed, it was possible for flows to be left
behind in conntrack on the host.
If a flow is present in the conntrack table, packet processing will
skip the POSTROUTING table of iptables and will use the information
in conntrack to do the translation. For this reason, long-lived flows
created towards a container that is later destroyed can affect new
flows incoming to the host, creating erroneous conditions where
traffic cannot reach new containers.
The fix takes care of cleaning them up when a container is destroyed.

The test in this commit reproduces the condition where a UDP flow is
established towards a container that is then destroyed. The test
verifies that the established flow is gone after the container is
destroyed.

Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>

fcrisciani added a commit to fcrisciani/docker that referenced this issue Apr 11, 2017

Vendoring Libnetwork library
- adding conntrack flush fix for moby/moby#8795

Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>


@thaJeztah thaJeztah closed this in #32505 Apr 11, 2017

@Hermain

Hermain commented May 5, 2017

I ran into the same issue with docker swarm. I send gelf logs to logstash, and after logstash is restarted the logs no longer reach it. Still in need of a fix (instead of the proposed workarounds) that would support multiple instances of logstash: whenever a logstash container dies or is restarted, no more UDP packets should be sent to it (the stale IP address mapping should be removed).

@fcrisciani

Contributor

fcrisciani commented May 5, 2017

@Hermain can you please post which docker version you are using?

@Hermain

Hermain commented May 8, 2017

I am using version 17.03.1-ce.

@thaJeztah

Member

thaJeztah commented May 8, 2017

@Hermain this fix is part of docker 17.05, so that's expected, as 17.03.1 does not have the fix yet.

@thaJeztah thaJeztah added this to the 17.05.0 milestone May 8, 2017

@Hermain

Hermain commented May 8, 2017

Great, I'll test it with 17.05.0 once it's released.

@Hermain

Hermain commented May 8, 2017

Does this fix also apply to a swarm? Meaning, when a node or a container dies and a replica is created on a new node, will the packets be forwarded to it?

@fcrisciani

Contributor

fcrisciani commented May 8, 2017

@Hermain yep, it should also cover swarm mode.

@Hermain

Hermain commented May 31, 2017

I tested this on ubuntu with a one-node swarm. I have logstash listening for gelf messages on localhost, generated by a container:

docker run --log-driver=gelf --log-opt gelf-address=udp://127.0.0.1:12201 ubuntu /bin/sh -c 'while true; do date "+%d-%m-%Y %H:%M:%S:%3N" | xargs printf "%s %s | 51c489da-2ba7-466e-abe1-14c236de54c5 | INFO | HostingLoggerExtensions.RequestFinished    | Request finished in 35.1624ms 200 application/json; charset=utf-8 message end\n"; sleep 1 ; done'

When I docker kill logstash while the log generator is running, it takes ~5 minutes until the first UDP messages reach the restarted logstash.

If I start the log generator after logstash, everything works perfectly right off the bat. To me it seems like the bug still exists.

I use Docker version 17.05.0-ce, build 89658be

@ggaugry

ggaugry commented May 31, 2017

I also tested it with docker version 17.05-ce and still have the issue. We are doing some tcpdump captures inside a container, and all source IPs are rewritten to the docker interface IP.

@fcrisciani

Contributor

fcrisciani commented Jun 1, 2017

@Hermain tried the following:
log generator container: docker run --log-driver=gelf --log-opt gelf-address=udp://127.0.0.1:5000 ubuntu /bin/sh -c 'while true; do date "+%d-%m-%Y %H:%M:%S:%3N" | xargs printf "%s %s | 51c489da-2ba7-466e-abe1-14c236de54c5 | INFO | HostingLoggerExtensions.RequestFinished | Request finished in 35.1624ms 200 application/json; charset=utf-8 message end\n"; sleep 1 ; done'

The daemon logs complain that there is no destination for the logs:

ERRO[3031] Failed to log msg "" for logger gelf: gelf: cannot send GELF message: write udp 127.0.0.1:33019->127.0.0.1:5000: write: connection refused 
ERRO[3033] Failed to log msg "" for logger gelf: gelf: cannot send GELF message: write udp 127.0.0.1:33019->127.0.0.1:5000: write: connection refused 
ERRO[3035] Failed to log msg "" for logger gelf: gelf: cannot send GELF message: write udp 127.0.0.1:33019->127.0.0.1:5000: write: connection refused 

UDP sink container (a simple image that has networking tools): docker run -d --name dst -p5000:5000/udp nicolaka/netshoot top
run tcpdump: docker exec -it dst tcpdump -eni eth0 udp and port 5000

output like:

01:03:45.183365 02:42:74:ca:af:79 > 02:42:ac:11:00:02, ethertype IPv4 (0x0800), length 560: 172.17.0.1.59017 > 172.17.0.2.5000: UDP, length 518
01:03:46.187920 02:42:74:ca:af:79 > 02:42:ac:11:00:02, ethertype IPv4 (0x0800), length 560: 172.17.0.1.49224 > 172.17.0.2.5000: UDP, length 518
01:03:47.193422 02:42:74:ca:af:79 > 02:42:ac:11:00:02, ethertype IPv4 (0x0800), length 561: 172.17.0.1.39521 > 172.17.0.2.5000: UDP, length 519
01:03:48.197281 02:42:74:ca:af:79 > 02:42:ac:11:00:02, ethertype IPv4 (0x0800), length 560: 172.17.0.1.43709 > 172.17.0.2.5000: UDP, length 518

Logs are arriving correctly.

Now the steps that you were mentioning:
docker kill dst
In the daemon logs (you have to set the daemon in debug mode) I see the cleanup of the conntrack flows as expected from this patch:

DEBU[3315] DeleteConntrackEntries purged ipv4:31, ipv6:0 

and again the daemon complains:

ERRO[3316] Failed to log msg "" for logger gelf: gelf: cannot send GELF message: write udp 127.0.0.1:33019->127.0.0.1:5000: write: connection refused 
ERRO[3318] Failed to log msg "" for logger gelf: gelf: cannot send GELF message: write udp 127.0.0.1:33019->127.0.0.1:5000: write: connection refused 
ERRO[3320] Failed to log msg "" for logger gelf: gelf: cannot send GELF message: write udp 127.0.0.1:33019->127.0.0.1:5000: write: connection refused 
ERRO[3322] Failed to log msg "" for logger gelf: gelf: cannot send GELF message: write udp 127.0.0.1:33019->127.0.0.1:5000: write: connection refused 

now: docker start dst
the daemon stops posting errors
run again: docker exec -it dst tcpdump -eni eth0 udp and port 5000
and see the packets arriving correctly:

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
01:09:11.202470 02:42:74:ca:af:79 > 02:42:ac:11:00:02, ethertype IPv4 (0x0800), length 560: 172.17.0.1.50807 > 172.17.0.2.5000: UDP, length 518
01:09:12.207231 02:42:74:ca:af:79 > 02:42:ac:11:00:02, ethertype IPv4 (0x0800), length 560: 172.17.0.1.47575 > 172.17.0.2.5000: UDP, length 518
@fcrisciani

Contributor

fcrisciani commented Jun 1, 2017

@ggaugry when you use the routing mesh, the rewrite of the source IP address is expected; it guarantees that the return flow works properly. Can you describe in more detail the issue you are referring to?

@Hermain

Hermain commented Jun 14, 2017

@fcrisciani any updates since you reproduced the issue? This bug still exists...

@thaJeztah

Member

thaJeztah commented Jun 14, 2017

@Hermain Looking at the comment above, @fcrisciani was not able to reproduce; udp traffic stopped when the dst container was killed, and started again when the dst container was started. Can you give more details? Exact steps to reproduce?

@fcrisciani

Contributor

fcrisciani commented Jun 14, 2017

@Hermain the issue that you are experiencing is most likely the one fixed by this PR: docker/libnetwork#1792. The 17.06-rc3 is out; you should try with that image to confirm that the issue is fixed.

That 5-minute delay to reconcile exactly matches the expiration time of the MAC entry.

@Hermain

Hermain commented Oct 4, 2017

The issue still persists. I now used the same tools as @fcrisciani to reproduce it again, independently of logstash.

I have a one-node docker swarm and use nicolaka/netshoot, which I start with docker stack deploy and the following compose file:

version: "3.1"
services:
  udpReceiver:
    image: nicolaka/netshoot
    ports:
      - "127.0.0.1:12201:12201/udp"
    command: tcpdump -eni any udp and port 12201

If I now generate logs with docker run:

docker run --log-driver=gelf --log-opt gelf-address=udp://127.0.0.1:12201  ubuntu /bin/sh -c 'COUNTER=1;while true; do date "+%Y-%m-%d %H:%M:%S.%3N" | xargs printf "%s %s | 51c489da-2ba7-466e-abe1-14c236de54c5 | INFO | HostingLoggerExtensions.RequestFinished    | $COUNTER\n"; COUNTER=$((COUNTER+1)); sleep 1; done' 

and use docker logs on the netshoot container, I can see that it receives UDP packets as expected. I leave the log-generating container running and sending logs.

If I now docker kill ${netStatContainerId}, docker swarm will spin up a new netshoot container, and the logs from the log generator will not reach it. (I test this by doing docker logs on the new container and waiting for a minute --> nothing happens.)

If I stop the log generator and start a new one, those logs reach the application.

Looks like this bug, right?

Docker version returns:

Client:
 Version:      17.09.0-ce
 API version:  1.32
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:42:18 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.09.0-ce
 API version:  1.32 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:40:56 2017
 OS/Arch:      linux/amd64
 Experimental: false
@Hermain

Hermain commented Oct 6, 2017

Or an even easier way to reproduce the problem:
Start the log generator from my last post, then start the log receiver --> logs will never reach the receiver, as witnessed with docker service logs udpReceiver.
With 17.05 it at least recovered after 5 minutes; now it never recovers.

@amithgeorge

amithgeorge commented Jan 3, 2018

Any updates on this? We are seeing the same issue with web app containers sending udp messages to a rsyslog container. If the rsyslog container is killed and started again, the web app containers also need to be killed and started again for the udp messages to reach the rsyslog container. This is super weird. We are still on 17.05.0-ce, build 89658be. Unlike with what @Hermain posted, it doesn't work even after 5 mins. Only a stop/start fixes this.

@levesquejf

levesquejf commented Apr 30, 2018

I have the same issue with version 17.12.1-ce, build 3dfb8343b139d6342acfd9975d7f1068b5b1c3d3, running on AWS ECS. An outside host is sending UDP packets to the container every 500ms. Before I restart the container, everything runs fine. Once the container is restarted, I see around 50% packet loss. When I have the issue, using tcpdump inside the container I see all the ingress and egress packets; however, when I run tcpdump on the docker host, I only see half of the packets coming from the container.

The workaround conntrack -F is working for me.

@praseodym

praseodym commented Apr 30, 2018

I see the same behaviour; the conntrack table is not flushed on a container restart so packets will be effectively blackholed until it is.

Only UDP ‘connections’ with the same source IP+port pair are affected, so not everyone will be hitting this bug.
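To check whether a given sender is affected, one can look for a stale entry keyed on that source IP and port (a sketch; the address and port are placeholders, not values from this thread):

conntrack -L -p udp --orig-src 192.0.2.10 --orig-port-src 4000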

@chrisxaustin

chrisxaustin commented Apr 30, 2018

I saw this behaviour on an instance that received ~4k syslog messages per second.
This didn't only happen after a restart, though; the container would stop seeing traffic from some of the sources until I used conntrack to clear the table. Tcpdump on the host showed the traffic, but the container never saw it.

I've stopped using docker for that particular service, since I can't afford to lose logs and I couldn't find a solution.

@levesquejf

levesquejf commented May 1, 2018

@fcrisciani I understand you worked on this last year. Since this issue is currently closed but still present in 17.12.1, would it be better to have a new issue created? Is there any information you need to reproduce the issue? Let me know if I can help to get that fixed.

@mman

mman commented May 3, 2018

Since this issue is real, I'm attaching the least invasive solution I have found to work rather reliably. Use nohup(1) or any other system-dependent mechanism to keep this script running in the background; it watches for the container starting up and cleans the conntrack entries corresponding to a given container name that exposes a given UDP port. Modify the script to adjust c and p appropriately.

#!/bin/bash

export PATH=/bin:/usr/bin:/sbin:/usr/sbin

# modify c and p to match your container name and UDP port
c=YOUR_CONTAINER_NAME
p=12345

# block on each container start event, then purge stale UDP conntrack entries for the port
docker events --filter type=container --filter event=start --filter container=$c | while read
do
    logger "$c restarted"
    conntrack -D -p udp --orig-port-dst $p >/dev/null 2>&1
done

Since Docker touches iptables on the Linux host when containers are created, I do believe that it should also properly clean up the conntrack mappings belonging to the container.
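A possible way to keep the watcher script above running in the background, as suggested (a sketch; the script name and log path are placeholders):

nohup ./conntrack-watch.sh >/var/log/conntrack-watch.log 2>&1 &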

@fcrisciani

Contributor

fcrisciani commented May 10, 2018

@Hermain @mman looking into this; as far as I can see, the problem seems to be in the state maintained inside the IPVS connection table.
For now, if you enter the ingress sandbox on each node (nsenter --net=/var/run/docker/netns/ingress_sbox) and enable this knob: echo 1 > /proc/sys/net/ipv4/vs/expire_nodest_conn, IPVS will automatically purge a connection once its IPVS backend is no longer available. With that, the reproduction mentioned in #8795 (comment) seems to work properly. I am still working on making sure that this is enough.
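Put together, the workaround described above amounts to running something like this on each swarm node (a sketch; the namespace path and knob are the ones quoted in the comment):

nsenter --net=/var/run/docker/netns/ingress_sbox sh -c 'echo 1 > /proc/sys/net/ipv4/vs/expire_nodest_conn'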

@thaJeztah

Member

thaJeztah commented May 14, 2018

@fcrisciani should we reopen this issue, or is this a different cause than the original one, and should we have a new issue for tracking?

@levesquejf

levesquejf commented Jul 26, 2018

@thaJeztah @fcrisciani Is the referenced issue docker/libnetwork#2154 fixing the UDP packet loss issue after container restart?

@fcrisciani

Contributor

fcrisciani commented Jul 26, 2018

@levesquejf also this one is needed: docker/libnetwork#2243
