DNS resolving stops working #1132

Closed
carn1x opened this issue Jan 12, 2017 · 40 comments

Comments

@carn1x commented Jan 12, 2017

Expected behavior

A load test client container can maintain a high rate of requests indefinitely

Actual behavior

A load test client container can maintain a high rate of requests only for a short amount of time (5-10 minutes) before eventually no longer being able to resolve DNS. This same behaviour then spreads to all other containers already running or created after the issue occurs.

Information

Docker for Mac: version: 1.13.0-rc5-beta35 (25211e84d)
OS X: version 10.11.6 (build: 15G1108)
Diagnose tool output while in this state:

failure: Optional("diagnostic: response is not valid JSON")

Steps to reproduce the behavior

  1. Run an endless load test with many parallel clients (achieved with Locust and 400 concurrent users, which runs multiple threads, each firing requests using the Python requests library) for 5-10 minutes against an external (non-Docker) webserver. Eventually requests will begin failing with no response.

  2. docker exec into the client container or any other running container, or run a new container, and try to perform a DNS lookup; observe the failure (a quick resolver-vs-connectivity check is sketched just after these steps):

$ docker run -it ubuntu:14.04
root@8c1afbd8224f:/# ping archive.ubuntu.com
ping: unknown host archive.ubuntu.com
  3. Add a hosts file entry within an affected container, observe connectivity:
root@8c1afbd8224f:/# echo '91.189.88.161 archive.ubuntu.com' > /etc/hosts
root@8c1afbd8224f:/# ping archive.ubuntu.com
PING archive.ubuntu.com (91.189.88.161) 56(84) bytes of data.
64 bytes from archive.ubuntu.com (91.189.88.161): icmp_seq=1 ttl=37 time=0.344 ms
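
For anyone reproducing this, a quick check to separate resolver failure from raw connectivity inside an affected container (a rough sketch only; nslookup may need to be installed first in a minimal image such as ubuntu:14.04):

cat /etc/resolv.conf            # which nameserver the container is configured to use
nslookup archive.ubuntu.com     # does that resolver still answer?
ping -c 1 91.189.88.161         # does raw IP connectivity still work?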

Steps to workaround

  1. Restart Docker for Mac

  2. Diagnose tool now outputs:

Docker for Mac: version: 1.13.0-rc5-beta35 (25211e84d)
OS X: version 10.11.6 (build: 15G1108)
logs: /tmp/D750656E-BCA6-4A72-81A9-9EFC44BA71C8/20170112-135724.tar.gz
[OK]     vmnetd
[OK]     dns
[OK]     driver.amd64-linux
[OK]     virtualization VT-X
[OK]     app
[OK]     moby
[OK]     system
[OK]     moby-syslog
[OK]     db
[OK]     env
[OK]     virtualization kern.hv_support
[OK]     slirp
[OK]     osxfs
[OK]     moby-console
[OK]     logs
[OK]     docker-cli
[OK]     menubar
[OK]     disk
  3. New containers have connectivity:
root@b0ab6a608c29:/# ping archive.ubuntu.com
PING archive.ubuntu.com (91.189.88.152) 56(84) bytes of data.
64 bytes from steelix.canonical.com (91.189.88.152): icmp_seq=1 ttl=37 time=0.730 ms
@Vanuan commented Jan 12, 2017

Are you on VPN? Is it similar to #997?

@carn1x (Author) commented Jan 12, 2017

I do regularly run on an IPsec VPN, so I've just tested again and the same issue occurs with the VPN disabled.

@laffuste commented Jan 13, 2017

Hi, colleague of carn1x here, observing the same error.

After a cooldown period for the affected load test container, starting any other container shows:

docker: Error response from daemon: driver failed programming external connectivity on endpoint CONTAINER_NAME (b4cc2d99e1e22e947dc6eaca94e12bd551fb93dde7cb762c7668a6f6276692dc): Error starting userland proxy: Bind for 0.0.0.0:5555: unexpected error Hostnet.Host_uwt.Sockets.Too_many_connections.

It appears to be originating from this line:
https://github.com/docker/vpnkit/blob/master/src/hostnet/host_uwt.ml#L95

@dsheets (Contributor) commented Jan 17, 2017

Thanks for the report! I have escalated this issue to our networking team.

@dtanase commented Jan 31, 2017

I have the same issue as @laffuste and the only way around it is to restart docker. I am using Docker for Mac (stable) Version 1.13.0 (15072). Thank you!

@kevdowney commented Feb 17, 2017

+1 for this issue.

Most notably on VPN (Cisco AnyConnect), containers make outbound requests that never close. This issue is not as pronounced outside of VPN.

Connections remain in CLOSED and FIN_WAIT_2 states:

$ sudo lsof -i -P -n|grep com.dock

com.docke 45582        kdowney  153u  IPv4 0x63aa6841fb399345      0t0  TCP 172.19.142.212:63444->23.209.176.27:443 (FIN_WAIT_2)
com.docke 45582        kdowney  154u  IPv4 0x63aa684222a9dc3d      0t0  TCP 172.19.142.212:63448->23.213.69.112:443 (FIN_WAIT_2)
com.docke 45582        kdowney  155u  IPv4 0x63aa684211710345      0t0  TCP 172.19.142.212:63701->23.213.69.112:443 (FIN_WAIT_2)
com.docke 45582        kdowney  156u  IPv4 0x63aa6841f51c9a4d      0t0  TCP 172.19.142.212:63454->54.231.40.83:443 (CLOSED)
com.docke 45582        kdowney  157u  IPv4 0x63aa684210e15e2d      0t0  TCP 172.19.142.212:63498->23.209.176.27:443 (FIN_WAIT_2)
com.docke 45582        kdowney  158u  IPv4 0x63aa684211fa5e75      0t0  UDP *:49557
com.docke 45582        kdowney  159u  IPv4 0x63aa684210e14c3d      0t0  TCP 172.19.142.212:63672->23.209.176.27:443 (FIN_WAIT_2)
com.docke 45582        kdowney  160u  IPv4 0x63aa68421eeb801d      0t0  TCP 172.19.142.212:63708->23.213.69.112:443 (FIN_WAIT_2)
com.docke 45582        kdowney  161u  IPv4 0x63aa68421eaa3345      0t0  TCP 172.19.142.212:63626->54.231.49.8:443 (CLOSED)

BTW, the lookups are to public addresses:
54.231.40.83 -> s3-1-w.amazonaws.com

These unclosed connections build up over time:

$ sudo lsof -i -P -n|grep com.dock|wc -l

     173

When the count gets close to 900 we get the error Hostnet.Host_uwt.Sockets.Too_many_connections; as we understand it, this comes from Docker's default limit of 900 connections.
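
A rough way to watch that count creep toward the limit from the Mac side (just a sketch; macOS ships no watch(1) by default, and the com.dock pattern matches the lsof output above):

while true; do
  printf '%s ' "$(date '+%H:%M:%S')"
  sudo lsof -i -P -n | grep -c com.dock
  sleep 30
done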

The only solution is to restart the Docker app. This is blocking lots of developers from effectively using Docker for local development.

Is there a fix coming?

Diagnostic ID: 2A1C5824-4B7E-4E2B-B1CB-426D8499C6D2

$ docker info

Containers: 8
 Running: 2
 Paused: 0
 Stopped: 6
Images: 20
Server Version: 1.13.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 139
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1
runc version: 9df8b306d01f59d3a8029be411de015b7304dd8f
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.9.8-moby
Operating System: Alpine Linux v3.5
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.787 GiB
Name: moby
ID: 2OAV:6UBN:DQWC:BMUF:NHBN:V52V:QMTG:6LXS:SGWG:KBBC:NXJP:PJGX
Docker Root Dir: /var/lib/docker
Debug Mode (client): true
Debug Mode (server): true
 File Descriptors: 33
 Goroutines: 40
 System Time: 2017-02-17T19:56:46.170042147Z
 EventsListeners: 1
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
 ec2-54-146-21-195.compute-1.amazonaws.com:10006
 127.0.0.0/8
Live Restore Enabled: false
$ docker version
Client:
 Version:      1.13.1
 API version:  1.26
 Go version:   go1.7.5
 Git commit:   092cba3
 Built:        Wed Feb  8 08:47:51 2017
 OS/Arch:      darwin/amd64

Server:
 Version:      1.13.1
 API version:  1.26 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   092cba3
 Built:        Wed Feb  8 08:47:51 2017
 OS/Arch:      linux/amd64
 Experimental: true
$ docker-compose version
docker-compose version 1.11.1, build 7c5d5e4
docker-py version: 2.0.2
CPython version: 2.7.12
OpenSSL version: OpenSSL 1.0.2j  26 Sep 2016
@lcardito commented Feb 20, 2017

+1

It now appears to manifest on the Docker stable channel:

Version 1.13.1 (15353) Channel: Stable 94675c5a76

docker-compose version 1.11.1, build 7c5d5e4

System Version: OS X 10.11.6 (15G1217) Kernel Version: Darwin 15.6.0

@djs55 (Contributor) commented Feb 20, 2017

The Mac has 2 connection / file descriptor limits: a per-process limit and a global limit.

On my system the limits are:

$ sysctl -a | grep kern.maxfiles
kern.maxfiles: 12288
kern.maxfilesperproc: 10240

Docker for Mac is quite conservative by default and imposes a lower limit of 900, stored here:

$ cd ~/Library/Containers/com.docker.docker/Data/database/
$ git reset --hard
HEAD is now at a1732e4 Settings Changed 20 Feb 17 11:06 +0000
$ cat com.docker.driver.amd64-linux/slirp/max-connections 
900

Perhaps this limit is too conservative.

It's possible to increase the Docker for Mac limit, for example:

$ echo 2000 > com.docker.driver.amd64-linux/slirp/max-connections 
$ git add com.docker.driver.amd64-linux/slirp/max-connections 
$ git commit -s -m 'increase fd limit'
[master 2c6e824] increase fd limit
 1 file changed, 1 insertion(+), 1 deletion(-)

The VM may need to restart at this point. If you experiment with this, let me know what happens.

It's important not to hit the global limit, otherwise other software could start to fail. However, I believe it is possible to increase kern.maxfiles, see for example:
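
As a rough sketch only (the values are illustrative, and newer macOS releases may require launchd limit overrides for the change to survive a reboot):

sudo sysctl -w kern.maxfiles=65536
sudo sysctl -w kern.maxfilesperproc=32768
sysctl kern.maxfiles kern.maxfilesperproc    # verify the new values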

@ajkerr commented Apr 19, 2017

Boosting the limit only postpones the problem if your containers make many connections. The com.docker.slirp process continues to accumulate connections over time, even if all containers have been stopped. Most of the connections are in CLOSE_WAIT state, from what I can see.

@ghost commented Apr 25, 2017

For me this quick fix was fine: remove all of the unused/stopped containers:

  • docker rm $(docker ps -a -q)

The error occurred after not cleaning up after running a bunch of docker builds, then iteratively running those builds and Ctrl-C-ing out of the running containers. I probably had 900+ containers just lying around that had never been docker rm'd.
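
On Docker 1.13 and later, the prune commands are an alternative way to do the same cleanup (offered as a suggestion only, not verified against this particular failure):

docker container prune    # remove all stopped containers
docker system prune       # also remove dangling images and unused networks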

@ajkerr commented May 2, 2017

@jmunsch The issue continued to occur for me, even after removing all running and stopped containers. Only a restart of the VM actually clears out the open connections on the Mac.

@bdharrington7 commented Jul 6, 2017

I think I'm also observing this on my machine.

macOS 10.12.3
Docker:

Version 17.06.0-ce-mac18 (18433)
Channel: stable
d9b66511e0

I'm running a local npm server that has to fetch metadata from npm, and it can't actually fetch the entire skimdb without halting (it makes it through about 30%).

Here's my Dockerfile if it helps to repro:

FROM node:6
WORKDIR /app
RUN npm install -g local-npm
CMD ["local-npm"]

Docker compose:

version: "3"
services:
  server:
    build: .
    ports:
      - "5080:5080"
    volumes:
      - npm:/app
    networks:
      - npm
volumes:
  npm:
networks:
  npm:

Run docker-compose up and log messages about the metadata download progress will begin printing. After about 30% it just stops, and I can't start the container again with a subsequent docker-compose down and docker-compose up.
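
A rough sketch of driving this repro while watching the host-side socket count (the com.dock pattern follows the lsof output earlier in this thread):

docker-compose up --build
# in a second terminal on the Mac:
sudo lsof -i -P -n | grep -c com.dock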

@moshemarciano commented Aug 20, 2017

Same problem here, macOS 10.12.6 and docker for mac Version 17.06.0-ce-mac19 (18663)
Channel: stable c98c1c25e0

Running 4-5 containers works perfectly, but networking just stops after a few days of very low (server) activity. Nothing but a restart of the client helps.

@DevJackSmith commented Aug 23, 2017

Same issue here on macOS 10.12.6 and docker Version 17.06.1-ce-mac24 (18950).
I have 3 containers: apache, mysql, redis.
The issue for me appears within 5 minutes of running my daemons, which enqueue and dequeue messages from a third-party queue (AWS SQS).

It's very annoying since I can't test my stuff for more than 5 minutes at a time; then I have to shut down all containers, wait for Docker to restart, and restart all containers, just to test for 5 more minutes. Can we add an "osx/10.12.x" label to this issue and maybe get a timeframe on a fix, please?

@kaz3t commented Aug 29, 2017

Same issue here on macOS 10.12.5 and docker 17.06.1-ce-mac24 (18950). I can't run functional tests that last ~30 minutes because after 5-10 minutes under load the containers become unreachable over the network.

@djs55 (Contributor) commented Aug 30, 2017

@bdharrington7 thanks for including repro steps. I managed to reproduce it once on 17.06 but not in subsequent runs. I've not managed to reproduce it on edge 17.07 yet. The one time I reproduced it I saw around 50 file handles in use on the Mac, but vpnkit claimed 2000 were in use, suggesting a connection leak somewhere.

It might be worth trying the edge 17.07 version since that vpnkit version contains a fix for one connection leak bug.

@ivanpricewaycom commented Sep 18, 2017

I followed the advice of @djs55: increasing to 2000 bought me some time, but I was then forced to increase to 4000. I imagine before too long I'll need to increase again or restart Docker for Mac.

For info, no VM/Docker restart was necessary for me to see the benefit.

If it is indeed a connection limit, I don't need to do much to get to the first (500) limit: about 5 containers running some standard webapp / db tasks.

Happy to provide further debug output if someone wants any specific info.

-i

@adamtheturtle commented Nov 20, 2017

I wonder if this is related to docker/docker-py#1293.

@stromnet commented Jan 30, 2018

Running with 17.12.0-ce-mac49 (stable f50f37646d) I just noticed that I cannot start more containers:

ERROR: for f152bc944689_f152bc944689_f152bc944689_mycomponent-e02  
Cannot start service e02: driver failed programming external connectivity on endpoint mycomponent-e02 
(2692f831520ad7c22f1825d8efbf5c8a58bd99261228393c4f37aa83e63cacff): 
Error starting userland proxy: Bind for 0.0.0.0:38003: unexpected error Host.Sockets.Too_many_connections

Checking with lsof -i -P -n like above, I have 1985 of these:

vpnkit    19670 johan 2007u  IPv4 0xe25cf5d1952cae29      0t0  TCP 192.168.1.123:52115->10.10.10.10:1234 (CLOSED)
vpnkit    19670 johan 2008u  IPv4 0xe25cf5d19533a531      0t0  TCP 192.168.1.123:52116->10.10.10.10:1234 (CLOSED)
vpnkit    19670 johan 2009u  IPv4 0xe25cf5d19533ae29      0t0  TCP 192.168.1.123:52117->10.10.10.10:1234 (CLOSED)
vpnkit    19670 johan 2010u  IPv4 0xe25cf5d19533b721      0t0  TCP 192.168.1.123:52118->10.10.10.10:1234 (CLOSED)

where 192.168.1.123 is my host machine's IP and 10.10.10.10 is an external server.
I previously had a docker container which created thousands of connections towards that server (partially a bug, but that is out of scope here). That container is now dead.

Looks like a bug in vpnkit not cleaning up properly?
Stopping Docker releases all the connections. After starting it again, I can start containers as usual.

@docker-desktop-robot (Collaborator) commented May 9, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale comment.
Stale issues will be closed after an additional 30d of inactivity.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so.

Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle stale

@STRML commented May 30, 2018

FYI, ~/Library/Containers/com.docker.docker/Data/database/ as described in the comment above does not exist anymore. You need to edit ~/Library/Group\ Containers/group.com.docker/settings.json and add:

"vpnKitMaxConnections" : 4000,

The default appears to be 2000. I am seeing a very large number of open connections in CLOSE_WAIT status that appear to be created by HAProxy health checks.
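
A rough sketch of applying that change from a shell (assumes jq is installed; Docker for Mac needs a restart afterwards to pick up the setting):

cd ~/Library/Group\ Containers/group.com.docker/
cp settings.json settings.json.bak
jq '. + {"vpnKitMaxConnections": 4000}' settings.json.bak > settings.json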

@djs55 (Contributor) commented May 31, 2018

@STRML thanks for writing about the settings.json file -- hopefully this is the long-term way we'll configure parameters like this.

@stevecheckoway could you share some repro instructions? I've not been able to reproduce this with recent builds.

The original complaint about DNS should hopefully be fixed because the method of DNS resolution has completely changed on the Mac -- we used to forward requests to upstream servers ourselves (which was subject to the file descriptor limit) but now we use a macOS library (which avoids the limit). There may be problems elsewhere though.

@STRML commented May 31, 2018

I can reproduce it very easily on edge (18.05.0-ce-mac66) by hosting an HAProxy container that health-checks a few services outside the Docker VM every second. I run out of ports within an hour.

@djs55 (Contributor) commented May 31, 2018

@STRML could you point me at a Dockerfile or docker-compose.yml that demonstrates the problem? Are these health checks TCP connects or HTTP transactions?

@stevecheckoway commented May 31, 2018

@djs55 I think you may be right. DNS resolution seems fine, but connecting sockets seems to fail.

I don't have a simple reproduction example. I've been using https://github.com/opennms-forge/docker-horizon-core-web set to monitor a few hosts. After a few days, the hosts are no longer reachable and I have to restart Docker itself. Connecting to the container with docker exec and trying to make connections fails.

As a sanity test, I just set nc to listen on a port (nc -k -l 1234 -n) on a remote machine, and then from inside a Docker container I wrote a bash loop to connect to that server, send a line of output, and close the socket. 150,000 connections later, it is still working fine.

Here's my test loop.

i=0
# Open fd 3 to the remote listener; the loop exits when the connect fails.
while exec 3>/dev/tcp/192.168.1.147/1234; do
  echo "$i"
  echo "$i" >&3   # send one line over the socket
  exec 3>&-       # close the socket
  i=$((i+1))
done

So the issue is unlikely to be just creating too many sockets. Next time the network starts failing, I'll try to debug more.

@djs55 (Contributor) commented May 31, 2018

@stevecheckoway thanks for the update. I've also tried to reproduce a failure along those lines but with no success so far :( There's clearly something a bit different about the workloads that trigger the problem... perhaps there's an issue with the transparent HTTP proxy used if the connections are to port 80 or 443?

@STRML commented May 31, 2018

@djs55 Working reproduction at https://github.com/STRML/docker-for-mac-socket-issue.

I see the open sockets increase at 5 per second, and they are never reaped. The only fix is to eventually restart the VM.
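
A rough way to run that repro and watch the leak from the Mac side (run.sh is the script referenced later in this thread; the grep pattern is a guess at the relevant process names):

git clone https://github.com/STRML/docker-for-mac-socket-issue
cd docker-for-mac-socket-issue
./run.sh &
sudo lsof -i -P -n | grep -c -e vpnkit -e com.dock    # repeat to watch the count grow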

@djs55 (Contributor) commented May 31, 2018

@STRML thanks for the repro. I'm seeing some strange behaviour -- at first it was leaking at about 5 per second as you report but it seems to have flattened out at about 80. I'll leave it running to see what happens.

Looking at netstat -an | grep 8000 I have one LISTEN socket, 12 ESTABLISHED connections and 65 in the SYN_SENT state. Looking at tcpdump I'm seeing lots of SYN and then RST ACK responses. Anyway, I'll need to investigate this a bit more -- are the connections to the python server supposed to be kept open indefinitely? It looks like the python server has hit a scalability limit and the host kernel is rejecting the new connection attempts.
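
A one-liner sketch to summarise the socket states on that port (assuming macOS netstat output, where the state is the last column of each TCP line):

netstat -an | grep 8000 | awk '{print $NF}' | sort | uniq -c | sort -rn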

@STRML commented May 31, 2018

Ah yes, I see the same top-out at about 80; it seems to be the Python server. Replacing it with a NodeJS listener causes the count to continually increase as expected.

I've updated the repo.

@skatsubo commented Jun 5, 2018

This issue affects our integration tests as well: connections in the CLOSED state stay forever, until Docker is restarted. I can successfully reproduce it using https://github.com/STRML/docker-for-mac-socket-issue.
So far a restart of the Docker service is the only known workaround, right?

@djs55 (Contributor) commented Jun 6, 2018

@STRML thanks for the repo update. It seems to reproduce for me -- I let it climb to over 300 and then I killed the run.sh, which shut down the containers. The file descriptors are still open. I'll investigate more.

djs55 added a commit to djs55/vpnkit that referenced this issue Jun 6, 2018

Close partially-established TCP connections
If a client sends SYN, we connect the external socket and reply with
SYN ACK. If the client responds with RST ACK then previously we would
leak the connection.

This patch extends the existing mechanism which closes connections when
switch ports are timed-out, adding a connection close when such an
"early reset" is encountered. Once the connection has been established
we assume we can use the existing closing mechanism: a client sending
a RST should cause the TCP/IP stack to close our flow.

Related to [docker/for-mac#1132]

Signed-off-by: David Scott <dave.scott@docker.com>

djs55 added a commit to djs55/vpnkit that referenced this issue Jun 6, 2018

Close partially-established TCP connections
If a client sends SYN, we connect the external socket and reply with
SYN ACK. If the client responds with RST ACK then previously we would
leak the connection.

This patch refactors the connection closing mechanism, creating an
idempotent `close_flow` function which is called

- on normal close when the proxy receives `FIN` etc
- on a reset, including during the handshake
- when a switch port is being timed-out.

This replaces the previous `on_destroy` promise which was used in
`Lwt.pick` since closing the connection should cause the proxy to receive
EOF.

Related to [docker/for-mac#1132]

Signed-off-by: David Scott <dave.scott@docker.com>

@djs55 (Contributor) commented Jun 11, 2018

I believe I have a fix for the problem. The TCP keepalive used by haproxy looks like this:

  • haproxy -> host: SYN
  • haproxy <- host: SYN ACK
  • haproxy -> host: RST ACK

vpnkit calls connect when it receives the initial SYN (to know whether to reply with SYN ACK or RST ACK itself). Unfortunately vpnkit only calls close in a "try finally" clause attached to the data transfer part of the connection, which never starts in this case. The fix was to make vpnkit close the socket in all cases when it receives a valid RST. This also explains why switching to HTTP keepalives worked around the issue, as the initial handshake would be completed, data would flow and then the connection would be torn down (which is normal behaviour from vpnkit's perspective).
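
To observe that pattern from the Mac side, a tcpdump filter along these lines should show just the SYN and RST segments of the health checks (the interface name and port are assumptions to adjust for your setup):

sudo tcpdump -ni en0 'tcp port 8000 and (tcp[tcpflags] & (tcp-syn|tcp-rst) != 0)'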

If you'd like to try the proposed fix for yourself, you can try the latest development build from:
https://download-stage.docker.com/mac/bysha1/1d148ea96ac8f5e72fd28aaebb09d36a8f1408c2/Docker.dmg

This is not yet released and may be buggy; therefore please don't use it for production :-)

If you get a chance to try this, let me know how you get on!

Thanks again for the reports and the repro case.

@ceo commented Jun 28, 2018

Thanks @djs55! Works like a charm!

At first it started accumulating some CLOSED connections, but now when I check I can't see more than 20.

Have a great day!

@djs55 (Contributor) commented Jun 29, 2018

@ceo thanks for letting me know -- glad it worked for you!

@STRML commented Jul 19, 2018

FYI this is released on edge @ 18.06.0-ce-rc3-mac68, although it is erroneously labeled as having something to do with HAProxy, when it really is:

If a client sends SYN, we connect the external socket and reply with SYN ACK. If the client responds with RST ACK then previously we would leak the connection.

(moby/vpnkit#392)

HAProxy just happens to be one popular program that does this.

@ernestojpg commented Aug 3, 2018

Hello @djs55 !

I'm having the same problem but with CentOS 7.5 and Nginx. I have updated to the latest Docker version 18.06.0-ce, build 0ffa825, but the problem is still there... :(

Is your fix only applicable to Mac?

Update: Sorry, it seems I did not test it properly. The issue also seems to be resolved for me with version 18.06.0-ce on CentOS with Nginx.

@Vanuan commented Aug 4, 2018

@ernestojpg This issue is about Docker for Mac (and probably Docker for Windows). It doesn't look like fixes in vpnkit are going to affect the Linux platform, since there's no vpnkit deployed on Linux. The issue you are having must be something else.

@docker-desktop-robot (Collaborator) commented Nov 2, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale comment.
Stale issues will be closed after an additional 30d of inactivity.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so.

Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle stale

@djs55 (Contributor) commented Nov 2, 2018

The fix that was in edge 18.06.0-ce-rc3 should also be in the current stable release 18.06.0. I'll close this ticket since I believe the issue is fixed everywhere. If something else goes wrong, please open a fresh ticket.

Thanks for your report!
