Memory Leak #1447
Comments
In addition to this issue, I also noticed that the web version sometimes keeps other sessions running (with no client watching them), and after some time a deadlock state occurs (about 26 sessions on average, against a theoretical maximum of 4) and the Glances web service must be restarted. Experienced this on Debian 9.7.
I narrowed this down to the requests lib; for whatever reason that library is leaking. As to why commenting out two of the other functions reduced the memory leak, I am not sure. Since I'm running this across a few operating systems, I am going to see whether pinning the requests lib to a particular version on the worst-affected ones reduces the leak.
FYI, here is a result showing that the requests lib is leaking (we used memory_profiler: https://pypi.org/project/memory-profiler/). [memory_profiler's per-line table (Line # / Mem usage / Increment / Line Contents) was attached here.]
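For environments where memory_profiler is not installed, the standard library's tracemalloc module gives a comparable per-line allocation report. This is a self-contained sketch of that technique, not the commenter's actual profiling run; `build_payload` is a stand-in for the leaking code path:

```python
import tracemalloc

def build_payload():
    # Stand-in for the profiled code path; each call allocates fresh strings.
    return ["x" * 100 for _ in range(1000)]

tracemalloc.start()
before = tracemalloc.take_snapshot()
payload = build_payload()
after = tracemalloc.take_snapshot()

# Per-line allocation deltas, analogous to memory_profiler's "Increment" column.
for stat in after.compare_to(before, "lineno")[:3]:
    print(stat)
```

Running this repeatedly around the suspect call (here, the HTTP POST) is how you can attribute growth to a specific library line.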
I swapped out requests for urllib3 here; I'll post results once this runs overnight and we collect a couple of hours' worth of data.
So I just wanted to update this in case anyone else has this problem. As I mentioned, I commented out some lines, and this reduced the memory leak we saw. However, we were still seeing leaks from the requests library as posted above, so I dropped requests in favor of urllib3. This further reduced the memory leak but did not solve it.

Using memory_profiler, I then realized that the standard json library was leaking (json.dumps, before I passed self.bulk into urllib3's post method), as well as urllib3 itself. Switching to ujson resolved the json leak, but urllib3 was still leaking. I then noticed that some of my instances were not leaking memory, and upgrading urllib3 to 1.24.1 resolved the remaining leaks.

Long story short: urllib3 1.24.1 + ujson 1.35, posting any amount of the Glances dict into an API endpoint, works great and shows no memory leaks. I'll also try reverting to requests to see whether I can find a version that does not leak.
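As a rough illustration of the setup described above, posting a stats dict via urllib3, serialized with ujson when available, might look like the sketch below. This is a hedged reconstruction, not the commenter's actual code; the endpoint URL and the helper names are hypothetical:

```python
import urllib3

try:
    import ujson as json  # the swap described above; falls back if absent
except ImportError:
    import json

def encode_bulk(bulk):
    """Serialize a stats dict to UTF-8 JSON bytes."""
    return json.dumps(bulk).encode("utf-8")

def post_bulk(http, url, bulk):
    """POST the encoded stats to `url` (a placeholder endpoint)."""
    return http.request(
        "POST",
        url,
        body=encode_bulk(bulk),
        headers={"Content-Type": "application/json"},
    )

# Reuse one PoolManager so connections are pooled rather than
# re-created on every POST.
http = urllib3.PoolManager()
```

Usage would then be `post_bulk(http, "http://example.invalid/api", stats_dict)` inside the export loop.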
The memory usage of
Please try to find a fix for this issue. Thanks.
Same here. I'd like to keep it running, but it grows by 1 GB per day.
@nicolargo I have the same problem. I have set history_size to 0, but it's growing endlessly on a Raspberry Pi I have. Running 3.3.0.4.
@nicolargo Sorry, but I cannot find a --disable-history option for the command line. Memory is at 32% now while my actual app uses only 7%. Hope it doesn't kill my Raspberry Pi :) Please let me know if there is any other way to solve this. I also tried --disable-plugin with many plugins and it does not help. Thank you, and I appreciate your work.
I have added a new memory profiling test for Glances. Here are the results when history is enabled (memory grows until the history size is reached and stays constant after that), and when history is disabled (using the --disable-history option).
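The plateau described here (memory grows until the history size is reached, then stays constant) is the signature of a bounded buffer: once the cap is hit, each new sample evicts the oldest one. A minimal illustration with a fixed-size `collections.deque` (not Glances' actual history implementation, just the same eviction behavior):

```python
from collections import deque

# A bounded history: memory grows until `maxlen` items are stored,
# then old samples are evicted, so usage plateaus.
history = deque(maxlen=3)
for sample in range(5):
    history.append(sample)

print(list(history))  # samples 0 and 1 were evicted
```

This prints `[2, 3, 4]`: the buffer never holds more than `maxlen` entries, which is why a correctly capped history should not grow without bound.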
Thank you very much.
It should...
I see you stop it after a certain time. I let it run constantly to monitor my servers 24x7, so that could be the reason. 2500 ms might not be a great test for this; I'd recommend watching it for at least a day to run into what people have mentioned here.
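One way to watch memory over a full day, as suggested above, is to sample the server process's resident set size (RSS) at a fixed interval. A minimal sketch using psutil (already in the Glances stack, per the versions listed below); the `sample_rss` helper and the PID variable are hypothetical names, not part of Glances:

```python
import time
import psutil

def sample_rss(pid, samples, interval_s):
    """Record a process's RSS (bytes) `samples` times, `interval_s` apart."""
    proc = psutil.Process(pid)
    readings = []
    for _ in range(samples):
        readings.append(proc.memory_info().rss)
        time.sleep(interval_s)
    return readings

# Example: watch a Glances server PID for 24 hours at one sample per
# minute, then check whether the curve keeps climbing or plateaus.
# rss = sample_rss(glances_pid, samples=24 * 60, interval_s=60)
```

A steadily climbing series over a day points at a leak; a curve that flattens once the history fills up is the expected bounded-history behavior.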
Ah! You use Glances in server mode!
That might not be the reason, unfortunately. I tried both web server and server mode. It happens in both, but I did find that the memory increase is slower in server mode.
Just made a quick test (2 hours) in server mode (-s) with --disable-history, and the memory does not grow... I am going to run a long test (24 hours).
Right, so going back to my original post: I don't think this is a Glances problem but rather a memory leak in the underlying libraries it uses. The test above is a good place to start. Would it be possible to have the server reboot itself every hour? I'd love to see what the memory-usage graph would look like.
No more memory leak after replacing json with ujson: with history, memory increases and then stabilizes; without history, it stays stable.
Description
We are seeing memory increase indefinitely, even after dropping the history size from 28800 to 1. I profiled memory consumption and found that memory was growing inside the update method in stats.py. As a minor adjustment, I commented out self._plugins[p].update_stats_history() and self._plugins[p].update_views(). This helped, but memory still increases, just much more slowly.
I've attached a graph showing memory consumption on an hourly basis; you can see the behavior changed significantly after I made the change. Memory consumption increases most on openSUSE 42.3 and Debian Stretch.
Let me know if you need any additional information.
Versions
Glances v3.1.0 with psutil v5.6.1
Graph