Memory leaking #388
Hello!

I am observing strange Mercure behaviour. The hub has around 1000 or fewer subscribers, but memory consumption increases gradually until it eats up all available memory. Processor usage then sky-rockets and the situation does not improve until the hub is restarted.

I started with 1 core and 2 GB of RAM, then upped it to 2 cores and 4 GB; it now runs on 2 cores and 8 GB. The behaviour repeats no matter how much RAM we give it. After a restart, everything gets back to normal.

I'm running the official Docker installation, dunglas/mercure:v0.10.4.

What is the expected memory usage? What can be done to debug the situation?

Comments
Hi, and thanks for reporting. To debug this, you'll have to use a debug build with the Go profiler enabled. I can provide you one if you don't know how to build it. Feel free to contact me on Symfony's Slack.
Hello Kevin, thank you for reaching out to me.
I assumed that disabling logging would keep memory from being eaten up so fast, but the same thing happens even with logging.driver: none. We have around 500 topics and around 600-1000 subscribers, with around 1-2 updates per second.
I'm in the process of changing the logger to Uber's Zap (we have known memory issues with Logrus, which is deprecated anyway). I'll open a PR soon.
Hi @dunglas, do you have a timeframe for when the next release will be done? We are really waiting for this... :)
Could you try the master version to check if it fixes the issue?
Hi,

Just noticed the same thing on our server running Mercure in Docker: it eventually uses up all the RAM, the server becomes unstable, and we have to restart the container to get memory usage back down.
Hi! Could you try the latest alpha version using Caddy? It should fix the issue, but confirmation would be very welcome. Also, this new version has a built-in profiler; if the problem persists, this will help a lot.
Looks promising so far! Nice job @dunglas 👍
I added a documentation entry explaining how to profile the hub: #422
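For readers who want to profile a self-built hub, here is a minimal sketch of how Go's standard profiler can be exposed; this is generic net/http/pprof usage under assumed settings (the port is arbitrary), not the exact setup from #422:

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on DefaultServeMux
)

func main() {
	// Serve the profiler on a private port, separate from the hub itself.
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```

A heap snapshot can then be inspected with `go tool pprof http://localhost:6060/debug/pprof/heap`.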
Thanks to the data provided by @aurelijusrozenas, we identified the issue. It looks like the topic selector cache isn't cleared even when no subscribers use this topic selector anymore. The problem is probably somewhere near https://github.com/dunglas/mercure/blob/main/topic_selector.go#L94
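To make the suspected failure mode concrete, here is an illustrative sketch (not the actual Mercure code) of a selector cache that only ever grows, because nothing evicts an entry once the subscribers using it are gone:

```go
package selectorcache

import (
	"regexp"
	"sync"
)

// cache memoizes compiled topic selectors. This mirrors the leak pattern
// described above: entries are added on first use but never removed.
var (
	mu    sync.RWMutex
	cache = map[string]*regexp.Regexp{}
)

// compile returns the compiled form of selector, caching it forever.
func compile(selector string) (*regexp.Regexp, error) {
	mu.RLock()
	re, ok := cache[selector]
	mu.RUnlock()
	if ok {
		return re, nil
	}

	re, err := regexp.Compile(selector)
	if err != nil {
		return nil, err
	}

	mu.Lock()
	cache[selector] = re // leak: no eviction when subscribers disappear
	mu.Unlock()
	return re, nil
}
```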
I ran some tests, but unfortunately I'm not able to reproduce the problem locally yet. A script (JS or anything else) that triggers the memory leak would help a lot.
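As a starting point, here is a rough Go sketch of the kind of load generator that might trigger the leak; it assumes a hub on localhost:3000 that accepts anonymous subscriptions, and the URL, topic pattern, and timings are all assumptions:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Open many short-lived subscriptions, each with a unique URI-template
	// selector, so the hub keeps compiling and caching new selectors.
	for i := 0; ; i++ {
		go func(n int) {
			url := fmt.Sprintf(
				"http://localhost:3000/.well-known/mercure?topic=https://example.com/items/{id%d}",
				n)
			resp, err := http.Get(url) // opens an SSE stream
			if err != nil {
				return
			}
			time.Sleep(2 * time.Second) // hold the subscription briefly...
			resp.Body.Close()           // ...then drop it to simulate churn
		}(i)
		time.Sleep(10 * time.Millisecond)
	}
}
```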
For the record, I plan to switch from our built-in cache implementation to Ristretto. This should fix the issue and improve memory usage for everyone.
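For context, Ristretto (github.com/dgraph-io/ristretto) is a cost-bounded cache that evicts entries on its own, which is exactly what a plain map lacks. A minimal usage sketch follows; the sizing values are illustrative assumptions, not Mercure's actual configuration:

```go
package main

import (
	"fmt"

	"github.com/dgraph-io/ristretto"
)

func main() {
	// Ristretto evicts entries on its own once MaxCost is reached,
	// preventing the unbounded growth of a plain map.
	cache, err := ristretto.NewCache(&ristretto.Config{
		NumCounters: 1e6,     // keys to track frequency for (~10x max items)
		MaxCost:     1 << 26, // 64 MiB budget for cached values
		BufferItems: 64,      // recommended default
	})
	if err != nil {
		panic(err)
	}

	cache.Set("selector", "compiled-value", 1) // store with a cost of 1
	cache.Wait()                               // Set is async; flush buffers

	if v, ok := cache.Get("selector"); ok {
		fmt.Println(v)
	}
}
```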