x/net/http2: high RAM usage after closing response bodies early with HTTP2 connections #20448
The HTTP/2 client appears to use a large amount of memory when the user opens several serial requests to the same server, reading a small amount from each very large response body, and then closing the body.
This was discovered in GoogleCloudPlatform/gcsfuse#227. gcsfuse is a FUSE file system for Google Cloud Storage. Because the file system doesn't know how much of a GCS object is going to be read, when it detects what appears to be a sequential read through a file handle it opens an HTTP request for the entire object, consumes it sequentially as the client reads, and then cancels the request by closing the response body. The user in that issue noticed that making 500 such requests serially causes gcsfuse to use 2 GiB of memory, even after all response bodies have been closed and garbage collection has run.
While on an airplane without Wi-Fi I managed to reproduce this in a self-contained program. The program starts a server with a handler that returns 1 GiB of data. Then it repeatedly makes GET requests where the client reads 10 MiB and then closes the body. The client optionally sleeps between reads from the response body, simulating a reasonable bandwidth internet connection. Afterward the program dumps a memory profile.
You can run it like this, using a little utility I wrote to easily see peak memory usage:
Here are the max RSS results I get:
450 MiB is insane, given that only one request is in flight at a time. I see that
According to the memory profile, pretty much all of that 450 MiB is in the pipe buffer written to by the client read loop. Note that the high usage persists even if I call
I don't think the client is doing anything against the rules according to the package documentation (but please correct me if I'm wrong). So memory usage this high seems like a bug.
(By the way, the bandwidth achieved when using HTTP/2 is also generally lower than HTTP/1, and is much more variable. Should I file an issue for the poor performance too?)
At a glance, it sounds like the pipe buffer might simply not have the back-pressure that it needs, so the client side of the connection is filling it up with much of the 1GB response even though only 10MB is being drained. I wouldn't guess that this is a compiler/runtime issue unless there's evidence that the HTTP/2 client implementation shouldn't be retaining that memory.
I haven't had a chance to try running reproduce.go, but I believe the problem is that we keep writing to cs.bufPipe after the response body is closed if the server has more data to send. Note that we never set cs.bufPipe.b to nil, and closing a response body fails future Reads but not future Writes. I bet this problem wouldn't exist if the request was canceled rather than closed (not that I suggest doing that, this is clearly a bug).
@jacobsa We have a partial fix (see above). However, note that each HTTP/2 connection has a 1GB flow-control window by default, and currently there isn't a knob to adjust this value.
If the above fix is not good enough for you and you want a separate knob, could you file another bug? Thanks!
@tombergan: thanks. This is 4 MiB per stream by default though, right? I have thought in the past that it ought to be tunable (it seems way too high for low-latency/bandwidth connections, and too low for high-latency/bandwidth connections), but I've never had a good way to choose a value myself so never filed an issue. If I come up with something better, I will file.