http3: streams leaked in outgoingStreamsMap #4513
Comments
Thank you for investigating @GeorgeMac! This is likely related to the changes we introduced in v0.43. I tried reproducing the behavior: https://gist.github.com/marten-seemann/33db22a3f7f7d957803ca1d574bfeae7. Everything seems to work here: I don't see a lot of memory in the outgoing streams map, and regularly logging the size of the map confirms that there are never more than a few streams in it.
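For reference, a minimal sketch of that kind of reproduction loop (not the gist's exact code; the server address, request rate, and logging interval are assumptions for illustration):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"runtime"
	"time"

	"github.com/quic-go/quic-go/http3"
)

func main() {
	rt := &http3.RoundTripper{
		// Test setup only: skip certificate verification for a local server.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}
	defer rt.Close()
	client := &http.Client{Transport: rt}

	var m runtime.MemStats
	for i := 1; ; i++ {
		resp, err := client.Get("https://localhost:6121/")
		if err != nil {
			panic(err)
		}
		// Drain and close the body so the underlying stream can complete.
		io.Copy(io.Discard, resp.Body)
		resp.Body.Close()

		if i%500 == 0 {
			runtime.ReadMemStats(&m)
			fmt.Printf("after %d requests: HeapAlloc=%d MiB\n", i, m.HeapAlloc>>20)
		}
		time.Sleep(20 * time.Millisecond) // roughly 50 rps
	}
}
```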
The stream tracks its transitions through the QUIC state machine, and calls back into the streams map once it is done (see lines 365 to 388 in 93c4785).
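Schematically, the pattern under discussion looks something like this (a simplified sketch, not quic-go's actual types):

```go
// The stream reports "done" to whoever owns the map, and only that
// callback removes the entry.
package main

import "fmt"

type stream struct {
	num         int64
	sendDone    bool
	recvDone    bool
	onCompleted func(num int64) // wired to the streams map's delete method
}

// checkCompleted fires the callback once both directions have finished.
func (s *stream) checkCompleted() {
	if s.sendDone && s.recvDone {
		s.onCompleted(s.num)
	}
}

func main() {
	streams := map[int64]*stream{}
	s := &stream{num: 4, onCompleted: func(num int64) {
		delete(streams, num) // the only place entries are removed
		fmt.Println("deleted stream", num)
	}}
	streams[s.num] = s

	s.sendDone = true
	s.checkCompleted() // nothing happens yet
	s.recvDone = true
	s.checkCompleted() // now the stream is removed from the map
}
```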
Any idea why the leak doesn't show up in my example?
Hey @marten-seemann, thanks for the speedy reply! I'm doing a bit more digging this morning. I misinterpreted the pprof profiles a bit there.
I am going to see if I can get your reproduction to demonstrate this.
Here is some more context:
Still digging, but my latest theory is that these streams are being held hostage by an uncancelled context via newStateTrackingStream.
Update: I'm not convinced of this anymore. I added some atomic counters around creating and cancelling these contexts, and they match up perfectly.
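The counting approach looks roughly like this (an illustrative sketch, not the actual instrumentation; newTrackedContext is a hypothetical helper):

```go
package main

import (
	"context"
	"log"
	"sync"
	"sync/atomic"
)

var created, cancelled atomic.Int64

// newTrackedContext wraps context.WithCancel so that creations and
// cancellations can be compared later. sync.Once guards against a
// CancelFunc being invoked more than once.
func newTrackedContext(parent context.Context) (context.Context, context.CancelFunc) {
	created.Add(1)
	ctx, cancel := context.WithCancel(parent)
	var once sync.Once
	return ctx, func() {
		once.Do(func() { cancelled.Add(1) })
		cancel()
	}
}

func main() {
	ctx, cancel := newTrackedContext(context.Background())
	_ = ctx
	cancel()
	// If contexts were being leaked, these two numbers would drift apart.
	log.Printf("created=%d cancelled=%d", created.Load(), cancelled.Load())
}
```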
I have had a very similar issue since v0.43.0: a large number of streams accumulates in memory without being freed.
Looks pretty identical to my observations in pprof 👍
@marten-seemann @GeorgeMac Yup, I can confirm #4523 fixes it.
I was just stress testing our little reverse proxy built on quic-go and found that memory increased linearly with requests.
I was sending around 50 requests per second (rps), all of which were answered with 200 OK.
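For context, a minimal sketch of that kind of setup (the listen address and backend URL are placeholders; this is not the actual project code):

```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"

	"github.com/quic-go/quic-go/http3"
)

func main() {
	// Placeholder backend; the real service's addresses differ.
	backend, err := url.Parse("https://backend.example:443")
	if err != nil {
		log.Fatal(err)
	}

	proxy := httputil.NewSingleHostReverseProxy(backend)
	// Forward upstream requests over HTTP/3 instead of TCP.
	proxy.Transport = &http3.RoundTripper{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // test setup only
	}

	// Accept ordinary HTTP on the front side via the stdlib server.
	log.Fatal(http.ListenAndServe(":8080", proxy))
}
```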
A bit of profiling has shown that old stream numbers and their associated streams are accumulating here:
Here I was running around 50 rps against an instance of our project, which proxies requests from a Go stdlib net/http server over an http3.RoundTripper.
The dip and plateau in the graph are where I stopped sending requests; the memory is just being held in the streams map now.
Here is a screenshot of a heap profile where we can see it accumulating in the map.
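For anyone reproducing this, heap profiles like that one can be captured with the standard net/http/pprof endpoint (a generic sketch, not project-specific code):

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof handlers on the default mux
)

func main() {
	// Expose the profiling endpoint alongside the service under test.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... the rest of the program runs here; while the stress test is
	// going, capture an in-use heap profile with:
	//   go tool pprof -inuse_space http://localhost:6060/debug/pprof/heap
	select {}
}
```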
Looking at the relevant code, I have yet to see anything that clears streams from the StreamMap types.
I tried calling DeleteStream explicitly once isDone (in the code here) was true, but that led to errors on one side of the connection. So I figured I'd best open this back up to you all for your thoughts. Cheers!