userland proxy uses all RAM when exposing a big range of ports #11185
Comments
@brahmaroutu perhaps you have thoughts on this issue, since you implemented this in #9097 Also, (possibly) similar, but for |
@thaJeztah I tried this test and I get the failure. But all the ports are mapped when I do an inspect. Looks like it is a timing or memory related issue in the userland proxy. |
Hmf, really hope we can get rid of that (the proxy) at some point. @brahmaroutu do you think additional info is required on this? Also, perhaps you are able to answer this;
|
@thaJeztah I agree with @Jaykah that the only way to avoid running many parallel jobs to allocate all these ports is to block-allocate. Not sure refactoring that code would be a good idea while the Docker network drivers are changing. |
@Jaykah Can you check how many docker proxy instances are started? I wouldn't be surprised if the root cause was the execution of too many docker proxy processes to handle all of these port mappings. |
@unclejack Hundreds of them, http://prntscr.com/6dj6uu |
@Jaykah Thanks! @crosbymichael @LK4D4 The userland proxy needs to be removed, or at least not enabled for port ranges. |
@crosbymichael @LK4D4 @unclejack Hi guys, I understand that the solution to this may take a while, so I was wondering if you could suggest a (temporary) workaround to this issue. If there's a way to disable the proxy that seems like the way to go - unfortunately I'm not familiar enough with Go to do it myself. Thanks! |
There's actually some preparations in place to facilitate that (remove the proxy), see #11208 (comment), so there's hope :) |
@Jaykah I'm pretty sure that there will be solution in 1.6. For now if you know how to build docker you can replace body of this https://github.com/docker/docker/blob/master/daemon/networkdriver/portmapper/proxy.go#L115 function with return nil. But this will break inter-container communication :( |
Thanks guys! @LK4D4 so I assume I have to change that to:
Is that correct? Also, how soon do you think 1.6 will be released? |
It will be beginning of April I think |
Same issue here when trying to run https://github.com/QubitProducts/bamboo and listen on 10000-20000. |
@MiLk It spawns the whole docker binary for each port :) |
It might be better to spawn only one docker-proxy per container (if it is publishing ports) instead of one per port. |
@cpuguy83 I disagree; the only acceptable and proper solution is the complete removal (disabled and deleted from the code) of the userland proxy. edit: Just to be clear, this has already been out of the code once, but older distributions like RHEL6/CentOS6 weren't working with that configuration because of the old 2.6.32 kernel. |
@unclejack We keep talking about that... |
Hey guys. I saw the release of 1.6, any changes to user land proxy in that version? |
@Jaykah unfortunately, not yet |
Hey guys. Then if I kill the hundreds of docker-proxy processes on my Docker daemon machine, will it affect my running containers? In addition, I will never use 0.0.0.0:port -> container_ip:port to access a container's application. |
Any news to when the userland docker-proxy will be removed? |
@oblop It's optionally disabled on master, will be part of 1.7 |
I also face this problem now. I need to publish a large range of ports, which starts a lot of docker-proxy instances. Can anyone tell me how to solve it? I cannot use --net=host because ssh will not work and some commands like passwd will fail. |
@darknigh In 1.7 you will be able to disable it with --userland-proxy=false |
Did this make it into 1.7? I'm getting "flag not found" when I run with --userland-proxy=false on 1.7. |
@maxhawkins yes, it's in 1.7. See: https://github.com/docker/docker/blob/v1.7.0/daemon/config_linux.go#L75. It is a daemon option, so it should be provided when starting the Docker daemon. |
What is the recommended workaround when the host is macOS? According to https://docs.docker.com/network/host/, host networking is only supported on Linux hosts.
I have a container on a macOS host that needs a large range of ports exposed, and I see no viable option |
@djs55 ^^ do you know a workaround for Docker Desktop? |
For development use-cases where it's only necessary to be able to connect to the ports from the host, you could try enabling the experimental SOCKS proxy. This will allow you to connect to the container backend IPs directly. |
Is this still being worked on? |
For now, should the userland proxy be turned off by setting "userland-proxy": false in Docker's daemon.json? Any better advice?
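For reference, the daemon.json counterpart of the --userland-proxy=false flag discussed above looks like this (restart the daemon after changing it; published ports then rely on iptables NAT rules instead of per-port proxy processes):

```json
{
  "userland-proxy": false
}
```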
See: moby/moby#11185 Users of the Portainer frontend are publishing all exposed ports by default / accident. This is causing lockups and crashes on their devices. Since unexposed ports can still be published, and use of the internal TURN server is niche this change will save more pain than it will cause. This should be documented in a FAQ with the myriad issues Portainer can cause.
Still wondering if there's any solution to this planned to be implemented in Docker. I assume that Podman or some other implementation might be more efficient in this regard, but I have not tested it myself yet. I will post here if I do. Meanwhile, I'd really appreciate it if anyone shares some information about such a prospect. Thanks. |
I use network=host. Not a good solution, but there's no other way. |
Does this mean docker-proxy will be removed? I can still see docker-proxy processes running in the background. |
Yes, when using --network=host no proxy is started; compare: $ docker run -d -p 80 --name foo nginx:alpine
$ ps aux | grep '[-]proxy'
root 9556 0.0 0.3 1152904 3936 ? Sl 09:43 0:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 49154 -container-ip 172.17.0.2 -container-port 80
root 9561 0.0 0.3 1152904 3900 ? Sl 09:43 0:00 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 49154 -container-ip 172.17.0.2 -container-port 80
$ docker rm -fv foo
$ docker run -d --network=host --name foo nginx:alpine
$ ps aux | grep '[-]proxy'
# (no results) |
With the summary below in mind, is there any reason to keep this issue open? 8 years and 84 comments contain a fair amount of information that is no longer relevant today. If there is any remaining action to be taken, perhaps a fresh issue is better for that?
From: #11185 (comment)
The kernel bug doesn't seem to be reproducible anymore AFAIK and has likely been resolved since.
|
We actually do plan to fix this and finally have people willing to review network changes, so... soon! /cc @akerouanton |
I have a dockerized SIP application that requires a large number of ports for RTP. Since it recently became possible to expose a port range, I decided to move away from the --net=host scenario to using those ranges (-p 30000-40000:30000-40000/udp).
However, when a range is large enough, docker eats up all RAM and fails:
It would seem logical to combine those ranges when applied to iptables into something like:
--dports 30000:40000, and afaik we do not need to explicitly specify the new destination ports if they're going to match the original ones. Or am I missing something?
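A sketch of the consolidated rule proposed above, assuming an illustrative container IP of 172.17.0.2 (not a rule the daemon actually generates): iptables can match a whole UDP port range in a single rule, and a DNAT target given without an explicit port preserves each packet's original destination port, so no per-port rules (or per-port docker-proxy processes) would be needed when host and container ports match.

```
# Illustrative consolidated NAT rule; requires root to apply.
# Omitting the port in --to-destination keeps the original dest port.
iptables -t nat -A PREROUTING -p udp --dport 30000:40000 \
  -j DNAT --to-destination 172.17.0.2
```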