user land proxy uses all RAM memory when exposing a big range of ports #11185

Open
Jaykah opened this issue Mar 5, 2015 · 86 comments
@Jaykah

Jaykah commented Mar 5, 2015

I have a dockerized SIP application that requires a large number of ports for RTP. Since it recently became possible to expose a port range, I decided to move away from the --net=host scenario to using those ranges (-p 30000-40000:30000-40000/udp).

However, when the range is large enough, Docker eats up all RAM and fails:

ERRO[0230] Handler for POST /containers/{name:.*}/start returned error: Cannot start container 80df70ab22d94408e9a5a2c60590b1b1281e5a59b5531590738739c9f7c7c485: iptables failed: iptables --wait -t nat -A DOCKER -p udp -d 0/0 --dport 38207 ! -i docker0 -j DNAT --to-destination 172.17.0.3:38207:  (fork/exec /sbin/iptables: cannot allocate memory)
ERRO[0230] HTTP Error: statusCode=500 Cannot start container 80df70ab22d94408e9a5a2c60590b1b1281e5a59b5531590738739c9f7c7c485: iptables failed: iptables --wait -t nat -A DOCKER -p udp -d 0/0 --dport 38207 ! -i docker0 -j DNAT --to-destination 172.17.0.3:38207:  (fork/exec /sbin/iptables: cannot allocate memory)

It would seem logical to combine those ranges into a single iptables rule, something like --dports 30000:40000, and AFAIK we do not need to explicitly specify the new destination ports if they are going to match the original ones. Or am I missing something?
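
For illustration, the consolidated rule I have in mind would look roughly like the sketch below (not what Docker generates today; the container IP 172.17.0.3 is taken from the error above):

# Today: one DNAT rule per port, repeated for every port in the range
iptables --wait -t nat -A DOCKER -p udp -d 0/0 --dport 38207 ! -i docker0 -j DNAT --to-destination 172.17.0.3:38207

# Hypothetical consolidated rule: one entry covers the whole range; leaving the port
# off --to-destination keeps the original destination port
iptables --wait -t nat -A DOCKER -p udp -d 0/0 --dport 30000:40000 ! -i docker0 -j DNAT --to-destination 172.17.0.3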

@thaJeztah
Member

@brahmaroutu perhaps you have thoughts on this issue, since you implemented this in #9097

Also, (possibly) similar, but for EXPOSE; #9021

@brahmaroutu
Contributor

@thaJeztah I tried this test and I get the failure:
Error response from daemon: Cannot start container 83bc8a32f7a588903a80ca5dfa6c0d699501ff542c7ad8cf365dd95543ffe4a6: Error starting userland proxy:

But all the ports are mapped when I do an inspect. Looks like it is a timing or memory-related issue in the userland proxy.
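
For reference, one way to check which mappings were actually created (a generic sketch; <container> is a placeholder for the container name or ID):

docker inspect --format '{{json .NetworkSettings.Ports}}' <container>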

@thaJeztah
Member

Looks like it is a timing or memory-related issue in the userland proxy

Hmm, I really hope we can get rid of that (the proxy) at some point.

@brahmaroutu do you think additional info is required on this? Also, perhaps you are able to answer this:

It would seem logical to combine those ranges into a single iptables rule, something like --dports 30000:40000, and AFAIK we do not need to explicitly specify the new destination ports if they are going to match the original ones. Or am I missing something?

@brahmaroutu
Contributor

@thaJeztah I agree with @Jaykah that the only way to avoid running many parallel jobs to allocate all these ports is to allocate them as a block. I'm not sure refactoring that code, while the Docker network drivers are changing, would be a good idea.
The documentation is intended to say that if you specify -p 1-10:100-110, then both ranges must contain the same number of ports to match. I did not implement the default case where -p 1-10 would imply -p 1-10:1-10.
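
A couple of invocations illustrating that rule (a sketch; my-sip-image is just a placeholder image name):

# Host and container ranges identical
docker run -d -p 30000-30100:30000-30100/udp my-sip-image

# Host range mapped onto a different container range of equal length
docker run -d -p 40000-40100:30000-30100/udp my-sip-image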

@unclejack
Contributor

@Jaykah Can you check how many docker proxy instances are started? I wouldn't be surprised if the root cause was the execution of too many docker proxy processes to handle all of these port mappings.
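
One quick way to count them (a generic sketch; the [d] keeps grep from matching its own process):

ps aux | grep '[d]ocker-proxy' | wc -l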

@Jaykah
Author

Jaykah commented Mar 6, 2015

@unclejack Hundreds of them, http://prntscr.com/6dj6uu

@unclejack
Contributor

@Jaykah Thanks!

@crosbymichael @LK4D4 The user land proxy needs to be removed or at least not enabled for port ranges.

@unclejack changed the title from "Docker eats up 100% RAM when exposing a big range of ports" to "user land proxy uses all RAM memory when exposing a big range of ports" on Mar 6, 2015
@Jaykah
Author

Jaykah commented Mar 11, 2015

@crosbymichael @LK4D4 @unclejack Hi guys, I understand that the solution to this may take a while, so I was wondering if you could suggest a (temporary) workaround for this issue. If there's a way to disable the proxy, that seems like the way to go; unfortunately, I'm not familiar enough with Go to do it myself.

Thanks!

@thaJeztah
Member

There are actually some preparations in place to facilitate that (removing the proxy), see #11208 (comment), so there's hope :)

@LK4D4
Contributor

LK4D4 commented Mar 11, 2015

@Jaykah I'm pretty sure there will be a solution in 1.6. For now, if you know how to build Docker, you can replace the body of this function https://github.com/docker/docker/blob/master/daemon/networkdriver/portmapper/proxy.go#L115 with return nil. But this will break inter-container communication :(

@Jaykah
Author

Jaykah commented Mar 11, 2015

Thanks guys!

@LK4D4 so I assume I have to change that to:

func (p *proxyCommand) Start() error {
 return nil;
 r, w, err := os.Pipe()
 if err != nil {
  return fmt.Errorf("proxy unable to open os.Pipe %s", err)
 }

Is that correct?

Also, how soon do you think 1.6 will be released?

@LK4D4
Contributor

LK4D4 commented Mar 11, 2015

@Jaykah

func (p *proxyCommand) Start() error {
    return nil
}

It will be the beginning of April, I think.

@MiLk

MiLk commented Apr 3, 2015

Same issue here when trying to run https://github.com/QubitProducts/bamboo and listen on 10000-20000.
I had to use --net=host.

@LK4D4
Contributor

LK4D4 commented Apr 3, 2015

@MiLk It spawns the whole docker binary for each port :)

@cpuguy83
Member

cpuguy83 commented Apr 3, 2015

It might be good to start only one docker-proxy per container (if it is publishing ports) instead of one per port.

@unclejack
Contributor

@cpuguy83 I disagree; the only acceptable and proper solution is the complete removal of the userland proxy (disabled and deleted from the code).

edit: Just to be clear, this has already been in the code once, but older distributions like RHEL6/CentOS6 weren't working with that configuration because of the old 2.6.32 kernel.

@cpuguy83
Member

cpuguy83 commented Apr 3, 2015

@unclejack We keep talking about that...

@Jaykah
Author

Jaykah commented Apr 16, 2015

Hey guys, I saw that 1.6 was released; were there any changes to the userland proxy in that version?

@thaJeztah
Member

@Jaykah unfortunately, not yet

@allencloud
Contributor

Hey guys.

If I kill the hundreds of docker-proxy processes on my Docker daemon machine, will it affect my running containers?

In addition, I will never use 0.0.0.0:port -> container_ip:port to access the container's application.

@LK4D4 @unclejack @thaJeztah

@blop

blop commented May 28, 2015

Any news on when the userland docker-proxy will be removed?
No milestone yet for this?
Thanks

@cpuguy83
Member

@blop It can optionally be disabled on master; this will be part of 1.7

@ghost

ghost commented Jun 17, 2015

I also face this problem now. I need to publish a large range of ports, and it starts a lot of docker-proxy instances. Can anyone tell me how to solve it? I cannot use --net=host because ssh will not work and some commands like passwd will fail.

@LK4D4
Contributor

LK4D4 commented Jun 17, 2015

@darknigh In 1.7 you will be able to disable it with --userland-proxy=false.

@maxhawkins

Did this make it into 1.7? I'm getting "flag not found" when I run with --userland-proxy=false on 1.7.

@thaJeztah
Member

Did this make it into 1.7? I'm getting "flag not found" when I run with --userland-proxy=false on 1.7.

@maxhawkins yes, it's in 1.7. See: https://github.com/docker/docker/blob/v1.7.0/daemon/config_linux.go#L75. It is a daemon option, so it should be provided when starting the Docker daemon.
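
In other words, the flag goes to the daemon, not to docker run. Roughly (a sketch; how the daemon is launched varies by version, distro, and init system):

# On 1.7 the daemon is started via the docker binary itself
docker -d --userland-proxy=false

# On current versions the daemon binary is dockerd
dockerd --userland-proxy=false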

@kevzettler

What is the recommended workaround when the host is macOS? --net=host is not viable per

https://docs.docker.com/network/host/

The host networking driver only works on Linux hosts, and is not supported on Docker for Mac, Docker for Windows, or Docker EE for Windows Server.

I have a container on a macOS host that needs to expose a large range of ports, and I see no viable option.

@thaJeztah
Member

@djs55 ^^ do you know a workaround for Docker Desktop?

@djs55

djs55 commented Nov 15, 2018

For development use cases where it's only necessary to be able to connect to the ports from the host, you could try enabling the experimental SOCKS proxy. This will allow you to connect to the container backend IPs directly.

@csvan

csvan commented Feb 6, 2020

Is this still being worked on?

@ghost

ghost commented Jul 8, 2020

Did this make it into 1.7? I'm getting "flag not found" when I run with --userland-proxy=false on 1.7.

@maxhawkins yes, it's in 1.7. See: https://github.com/docker/docker/blob/v1.7.0/daemon/config_linux.go#L75. It is a daemon option, so it should be provided when starting the Docker daemon.

For now, should userland-proxy be turned off in Docker's daemon.json? Any better advice?
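
For reference, the daemon.json form I'm asking about would look roughly like this (a sketch; on most Linux installs the file is /etc/docker/daemon.json, and the daemon needs a restart afterwards):

cat /etc/docker/daemon.json
{
  "userland-proxy": false
}
sudo systemctl restart docker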

felddy added a commit to felddy/foundryvtt-docker that referenced this issue Oct 29, 2020
See: moby/moby#11185

Users of the Portainer frontend are publishing all exposed ports by default / accident. This is causing lockups and crashes on their devices. Since unexposed ports can still be published, and use of the internal TURN server is niche, this change will save more pain than it will cause.

This should be documented in a FAQ with the myriad issues Portainer can cause.
@Va1

Va1 commented May 11, 2021

Still wondering if there's any solution to this planned to be implemented in Docker.

I assume that Podman or some other implementation might be more efficient in this regard, but I have not tested it myself yet. I will post here if I do. Meanwhile, I'd really appreciate it if anyone shares some information about such a prospect. Thanks.

@raarts

raarts commented May 11, 2021

I use --network=host. Not a good solution, but there's no other way.

@winkee01

Does this mean docker-proxy will be removed? I can still see the docker-proxy processes working in the background.

@thaJeztah
Member

Does this mean docker-proxy will be removed? I can still see the docker-proxy processes working in the background.

Yes, when using --network=host, no docker-proxy process will be started for that container. Here's an example: it first creates a container without --network=host, which starts a proxy for the published ports (one for IPv4, one for IPv6), then removes the container and starts a container with --network=host, after which no proxy processes are present:

$ docker run -d -p 80 --name foo nginx:alpine

$ ps aux | grep '[-]proxy'
root      9556  0.0  0.3 1152904 3936 ?        Sl   09:43   0:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 49154 -container-ip 172.17.0.2 -container-port 80
root      9561  0.0  0.3 1152904 3900 ?        Sl   09:43   0:00 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 49154 -container-ip 172.17.0.2 -container-port 80

$ docker rm -fv foo
$ docker run -d --network=host --name foo nginx:alpine
$ ps aux | grep '[-]proxy'
# (no results)

@polarathene
Contributor

polarathene commented May 9, 2023

With the summary below in mind, is there any reason to keep this issue open?

Eight years and 84 comments contain a fair amount of information that is no longer relevant today. If there is any remaining action to be taken, perhaps a fresh issue would be better for that?

  • userland-proxy isn't relevant to macOS / Windows Docker Desktop AFAIK (unclear if the memory issue is a problem there, but it may warrant its own issue if it is?)
  • On Linux hosts we can disable userland-proxy via daemon.json, or use host mode networking.
  • iptables rule generation, regardless of the userland-proxy setting, is still a concern and affects the time to provision a container, but it is tracked separately: When using userland-proxy=false many iptables entries instead of multiport #36214

From: #11185 (comment)

The kernel bug doesn't seem to be reproducible anymore AFAIK and has likely been resolved since.

Summary of comment history

  • When this was originally reported, there was no supported way to disable the userland proxy AFAIK. That option arrived and was briefly made the default, but it became a problem due to unexpected failures.
  • The macOS / Windows Docker Desktop apps have been said to no longer use the userland proxy, since they manage the VM's network differently internally, so it is presumably only a concern for Linux (setting aside the recently introduced Docker Desktop for Linux).
  • userland-proxy: false or host mode networking (--network host) on Linux hosts are said to avoid the issue.
  • Large port ranges (such as a 10k range) are considered excessive, and alternative network drivers have been advised instead, where the issue should not be present.
  • A related concern is that iptables rules are managed per individual port, with advice to configure a range in iptables instead. I do not know if there are plans to pursue that, but it sounds like a separate issue to track (EDIT: here it is: When using userland-proxy=false many iptables entries instead of multiport #36214).
    • This comment highlights that the iptables generation concern applies regardless of the userland-proxy setting; userland-proxy: true differs only in that it also creates proxy processes.
    • An example of iptables rules for a port range is shown here.
    • This comment notes that each individual port mapping opens a file descriptor, and a 10k range may take over 10 minutes to provision. At the time, systems may also have had low limits configured (LimitNOFILE), which could have caused problems if reached. A limit on the number of processes may also have been at risk of being hit.
  • Reproduction via the CLI, with output showing the time to start a container based on the size of the port range, and noting that a large enough port range makes the daemon appear unresponsive. Some comments shifted focus to the time it takes to create the individual iptables port mappings rather than the memory usage this issue was about.
  • A release was mentioned in the discussion with a fix that drastically reduced memory usage for large port ranges.

@cpuguy83
Member

cpuguy83 commented May 9, 2023

We actually do plan to fix this and finally have people willing to review network changes, so... soon!
Let's leave this open.

/cc @akerouanton

Sallenmoore pushed a commit to Sallenmoore/foundryvtt-docker that referenced this issue Jun 18, 2023