crypto/tls: web server leaking memory at block.reserve #28654
Comments
@FiloSottile I cannot find this code at tip; can you look at this issue?
@zemirco can you try to reproduce this on the tip version of Go (this requires building Go from source)? Or can you put together a self-contained program that reproduces the problem so we can run it?
Thank you for looking into this. I can only reproduce this issue in production; I haven't seen it on my local machine, which makes it hard to debug. You need a real domain with a valid certificate to make TLS work. Is there a Dockerfile which builds Go from source that I could use instead of the one from Docker Hub that I'm currently using? I upgraded Go inside Docker to use `…`
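No such Dockerfile was posted in the thread; as an illustration only, a hypothetical sketch of building Go at tip inside a container might look like this, using the released image's toolchain purely as the bootstrap compiler:

```dockerfile
# Hypothetical sketch, not from the thread: build Go from source (tip)
# inside a container, bootstrapping with the base image's released Go.
FROM golang:1.11
ENV GOROOT_BOOTSTRAP=/usr/local/go
RUN git clone https://go.googlesource.com/go /usr/local/go-tip \
    && cd /usr/local/go-tip/src \
    && ./make.bash
# Put the freshly built tip toolchain ahead of the released one.
ENV PATH=/usr/local/go-tip/bin:$PATH
```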
I assume you're just accumulating HTTP keep-alive connections and each is retaining some memory. How many file descriptors do you have open? Paste the output of `…`, and include the output of running with the environment variable `…` set. Also, you write:
```go
r := chi.NewRouter()
```

The first line of your "minimal version" uses some unspecified package. Do you have a complete standalone repro, ideally one without external packages? If you do need the external packages, can you at least specify their imports and versions?
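For illustration, a self-contained repro of the kind being requested could look like the sketch below; the handler and certificate paths are placeholders, not from the thread:

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	// Serve HTTPS directly, as in the report: no proxy in front.
	// net/http enables HTTP/2 automatically for TLS listeners.
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", mux))
}
```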
Here is the current graph showing the last 6 hours. Still seeing the linear growth. Here is the current `…`:
Output of `…`:
I'm using chi for routing: https://github.com/go-chi/chi. The version from my `…`:
I'm also using https://github.com/felixge/httpsnoop to capture HTTP-related metrics. It's in the critical path, as you can see in the SVG above. Here is the exact version:
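For context, a generic logging middleware built on `httpsnoop.CaptureMetrics` sits in the request path roughly like this; this is a sketch, not the reporter's actual code:

```go
package main

import (
	"log"
	"net/http"

	"github.com/felixge/httpsnoop"
)

// metricsMiddleware wraps every request so httpsnoop can record the
// status code, bytes written, and duration while the inner handler runs.
func metricsMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		m := httpsnoop.CaptureMetrics(next, w, r)
		log.Printf("%s %s -> %d (%d bytes, %s)",
			r.Method, r.URL.Path, m.Code, m.Written, m.Duration)
	})
}
```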
Output with `…` set:
The `…`. They are held by the `…`.
I'm also seeing this apparent memory leak on a production server, which serves HTTP/2 over TLS directly to the Internet without being behind a proxy like nginx. It takes some time to get close to exhausting the 2 GB of RAM its containing VM is allowed, but when that happens, top looks about like this:
I tried the minimal example in the net/http docs and could not reproduce this. Its memory usage remained constant even after close to a million HTTP/2 requests, while my app goes OOM after a tiny fraction of that, so something else is going on. In my case I'm routing with gorilla/mux and using justinas/alice for middleware handling. I'm going to try to put together a minimal example that reproduces the problem, but it will take some time. At this point I do suspect the trouble might not be in the standard library after all...
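A generic sketch of that kind of setup, with gorilla/mux routing behind a justinas/alice middleware chain; the handler, middleware, and certificate paths here are placeholders, not this server's code:

```go
package main

import (
	"log"
	"net/http"

	"github.com/gorilla/mux"
	"github.com/justinas/alice"
)

// loggingMiddleware stands in for whatever middleware the chain carries.
func loggingMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		log.Printf("%s %s", r.Method, r.URL.Path)
		next.ServeHTTP(w, r)
	})
}

func main() {
	r := mux.NewRouter()
	r.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})

	// alice composes the middleware into a single http.Handler chain.
	chain := alice.New(loggingMiddleware).Then(r)
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", chain))
}
```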
It looks like you're also using the gorilla packages. I know there is a memory leak in https://github.com/gorilla/context when combining it with `…`. Maybe @elithrar could help? There is a discussion about releasing new versions of the gorilla packages. The maintainers don't want to break compatibility with existing code, which is a good thing. By now we have Go modules, which could help release a new version without breaking old import paths.
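For background, the mitigation documented by gorilla/context itself (not something prescribed in this thread) is to wrap the root handler in `context.ClearHandler`; a minimal sketch:

```go
package main

import (
	"log"
	"net/http"

	"github.com/gorilla/context"
)

func main() {
	h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	// ClearHandler removes all values stored for the request in
	// gorilla/context's global map once the handler returns; without it,
	// those entries accumulate and look like a slow memory leak.
	log.Fatal(http.ListenAndServe(":8080", context.ClearHandler(h)))
}
```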
We can close this issue. With commit gorilla/sessions@12bd476 the memory leak is gone; it had nothing to do with the standard library. Here is the memory usage from the last 24 hours. As you can see, it remains flat after the update just before 6 pm.
@zemirco I think it's worth updating your SO question, or maybe putting a comment on it. (Glad you've got this issue sorted!)
What version of Go are you using (`go version`)?

Does this issue reproduce with the latest release?

yes

What operating system and processor architecture are you using (`go env`)?

`go env` output: …

What did you do?
I've got a web server. The server is directly exposed to the internet; there is no proxy in front of it. Here is a minimal version.
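The snippet itself was not captured here. Based on the first line quoted in the comments (`r := chi.NewRouter()`), a plausible reconstruction looks like this sketch; the route and certificate paths are placeholders:

```go
package main

import (
	"log"
	"net/http"

	"github.com/go-chi/chi"
)

func main() {
	r := chi.NewRouter()
	r.Get("/health", func(w http.ResponseWriter, req *http.Request) {
		w.WriteHeader(http.StatusOK) // answers the AWS health check
	})
	// TLS is terminated by the Go server itself; no proxy in front.
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", r))
}
```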
The web server runs inside Docker.
What did you expect to see?
I expected to see a more or less steady memory consumption.
What did you see instead?
I saw a steady growth in memory usage. Here is a screenshot for the last 30 days.
Restarts happened between October 21 and October 28, which is why the memory didn't have time to grow as much there. Before October 21 you can see the pattern where memory grows to a certain point and then drops sharply.
And here is one for the last 6 hours.
The time span between 3 and 7 am is the most interesting; after that I tried various things and killed the server. The server doesn't get a lot of traffic at the moment. I think the almost perfectly linear growth comes from an AWS health check: every 30 seconds AWS sends a request to my server to make sure it is still running, so the leak grows at a constant rate even with no real traffic.
Here is the output of `top5` from pprof:

A visual representation (SVG):
The exact lines in the code:
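For context, heap profiles like the one above are usually gathered by exposing the net/http/pprof endpoints and pointing `go tool pprof` at them; a minimal sketch, where the port is a placeholder:

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* on the default mux
)

func main() {
	// Expose profiling on a private port, separate from the public server.
	// Inspect the heap with:
	//   go tool pprof http://localhost:6060/debug/pprof/heap
	// then type `top5` at the pprof prompt to see the largest allocators.
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```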
Before opening an issue here I asked on Stack Overflow: https://stackoverflow.com/questions/53189316/golang-web-server-leaking-memory-at-crypto-tls-block-reserve. We couldn't find a solution, but several people told me to open an issue. Another user (https://serverfault.com/users/126632/michael-hampton) even said he sees the same issue.
Any ideas?
If you need further information please let me know.