Passenger Core increasing memory usage #1728
Comments
Disabling max-requests didn't seem to fix this. We are also noticing that average response times go up dramatically along with memory growth: [graph]
We haven't seen #1726 since updating our Passenger to the version with the configurable backlog. Has there been any progress on this issue? Is there any information we can provide to facilitate its resolution? We are now reloading nginx/Passenger daily to deal with the Passenger Core memory issue.
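In case it helps anyone else hitting this, a daily reload like the one described above can be automated with a cron entry along the lines below. The file path and service command are assumptions for a stock Ubuntu 14.04 / nginx install, not something from this thread; adjust for your setup.

```cron
# /etc/cron.d/passenger-reload (hypothetical path): reload nginx nightly at 04:00.
# Reloading nginx restarts the Passenger Core, releasing the leaked memory.
0 4 * * * root /usr/sbin/service nginx reload
```

A reload (rather than a full restart) lets nginx finish in-flight requests before cycling, so it is the gentler workaround for a production box.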
So far we haven't found any leads on where this growing memory usage comes from. There are a few suspect places (e.g. the queue between the AcceptLoadBalancer and the core controllers), but they don't seem significant enough to explain the kind of memory growth you're seeing. We're continuing to run tests.
Having a reproducible case would help us a lot.
@jwilm we've just found and fixed a memory leak in commit fe87178. We're not sure if it's related at all, but it occurs when dealing with (relatively) large request or response bodies on a loaded server (when the internal Passenger memory buffer overflows to disk). The fix will be part of 5.0.27, but we just wanted to give you a heads up in case you would like to try it already (it's a pretty tiny patch).
Thanks for the heads up; we'll update once it lands. |
@jwilm 5.0.27 is out, can you re-check? |
We have found (and fixed) another memory leak: #1797. This one is bigger, but it only triggers when there are more than 1024 concurrent requests. Maybe this is the leak that was affecting you?
That sounds likely! Whenever we add a new server, it doesn't exhibit the problem until it's under the same load as the rest of our servers. Each of them handles ~60k RPM (using fewer, less powerful servers than when this issue was originally filed), and the traffic comes in bursts.
When do you expect the latest memory leak patch to land in the ubuntu passenger ppa? We're very excited to try it out! |
@jwilm We're trying to get 5.0.28 out next week. |
@jwilm 5.0.28 has been released, can you check it out? :) |
Huzzah! I'll upgrade our front-end servers this afternoon. We're past our high traffic point for today, so providing a report will need to wait until tomorrow or the next day at the earliest. Thanks for getting that out!
Our servers have been upgraded. I'll let you know how it goes in a day or two! |
Here's memory usage on one of our frontend servers for the last 24 hours: [graph]. You can see the leak causing increased usage leading up to when we upgraded; after the upgrade, memory usage has been relatively flat. Here's the PassengerAgent memory usage for the same period [graph] and for the period after the upgrade [graph]. The major memory leak seems to be resolved, and we are quite happy with how Passenger has performed since the fix. Given the last graph, there may still be a minor leak (~30 MB over 20 hours), but it's small enough not to be an issue. Thanks for tracking this down and getting a patch out!
@jwilm great to see the big leak plugged! Would it be possible for you to check once more a week from now to see how the (minor) increased memory usage develops? If there is still a leak somewhere we'd like to open a new issue for that and hunt it down as well. |
@jwilm can you provide us with some final feedback on this? How is the behavior across multiple days/weeks? I'll go ahead and close this issue now, since the major leak was indeed fixed in 5.0.28.
Reported on the forum.
The Passenger Core process shows increasing memory usage of 5-8 GB daily:
after 20 hours: graph | passenger-memory-stats | passenger-status --show=server
after 4 days: graph
another run: passenger-status
Passenger version 5.0.21
Rails 2.2, running in single threaded mode
400 Rails instances
80,000 RPM Average throughput, with peaks reaching 110,000 RPM.
Ubuntu 14.04 on a bare-metal server (20 cores w/ hyperthreading)
Passenger max queue size is set to 80,000, max requests to 100,000. This is the same server where #1726 is occurring (Error 11 while connecting to upstream).
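For context, limits like these map onto the Passenger-for-nginx directives below. This is a hedged sketch reconstructed from the numbers in this report, not the actual config from the affected server; the exact values and placement (http block) may differ.

```nginx
# Sketch of the relevant Passenger settings, assuming the nginx integration mode
passenger_max_pool_size 400;               # 400 single-threaded Rails instances
passenger_max_request_queue_size 80000;    # "max queue size set to 80,000"
passenger_max_requests 100000;             # recycle a process after 100,000 requests
```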