
Memory Leak #1447

Closed

pawel-lmcb opened this issue Mar 23, 2019 · 21 comments

Comments

@pawel-lmcb

Before filing this issue, please read the manual (https://glances.readthedocs.io/en/latest/) and check whether the bug has already been reported (https://github.com/nicolargo/glances/issues).

Description

We are seeing memory usage increase indefinitely, even after dropping the history size from 28800 to 1. I profiled memory consumption and found that memory was growing inside the update method in stats.py. As a quick test I commented out self._plugins[p].update_stats_history() and self._plugins[p].update_views(). This helped, but memory still increases, just much more slowly.
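For anyone wanting to reproduce this kind of measurement, here is a minimal sketch (illustrative only, not Glances code) that uses tracemalloc to see which update calls allocate the most memory between two snapshots; the plugins dict and fake_cpu_update function are hypothetical stand-ins for Glances' internal plugin loop:

# Minimal sketch: locate which periodic update call allocates the most memory.
# "plugins" and fake_cpu_update() are hypothetical stand-ins for Glances internals.
import time
import tracemalloc

def fake_cpu_update():
    # placeholder for a plugin's update() / update_stats_history() / update_views()
    return [i * i for i in range(10000)]

plugins = {"cpu": fake_cpu_update}

tracemalloc.start()
baseline = tracemalloc.take_snapshot()

for _ in range(10):
    for name, update in plugins.items():
        update()
    time.sleep(0.1)

snapshot = tracemalloc.take_snapshot()
for stat in snapshot.compare_to(baseline, "lineno")[:5]:
    print(stat)  # largest net allocations since the baseline, grouped by source line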

I've attached a graph showing memory consumption on an hourly basis; you can see that the behavior changed significantly after I made the change. Memory consumption increases most on openSUSE 42.3 and Debian Stretch.

Let me know if you need any additional information.

Versions

Glances v3.1.0 with psutil v5.6.1

Graph

[Screenshot: Screen Shot 2019-03-23 at 1:19 PM]

@Javinator9889

In addition to this issue, I have also noticed that the web version sometimes keeps other sessions running even though no client is watching them, and after some time a deadlock state occurs (about 26 on average, versus a theoretical maximum of 4), after which the Glances web service has to be restarted.

Experienced this on Debian 9.7.

@pawel-lmcb (Author)

I narrowed this down to the requests lib; for whatever reason that library is leaking. As to why commenting out two of the other functions reduced the leak, I am not sure.

Since I'm running this across a few operating systems, I am going to see whether pinning the requests lib to a particular version on the worst-affected ones reduces the leak.

https://github.com/kennethreitz/requests/issues/4934
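A minimal way to check whether repeated requests.post calls alone make the process grow is to watch the process RSS while posting in a loop; a rough sketch (the endpoint URL and payload are placeholders):

# Rough leak check: POST the same payload repeatedly and watch our own RSS.
# The endpoint is a placeholder; any local HTTP sink will do.
import os
import psutil
import requests

proc = psutil.Process(os.getpid())
payload = {"metadata": {"host": "test"}, "values": list(range(1000))}

for i in range(1000):
    try:
        requests.post("http://127.0.0.1:8080/ingest", json=payload, timeout=5)
    except requests.RequestException:
        pass  # the endpoint may be unreachable; only our memory matters here
    if i % 100 == 0:
        print(i, proc.memory_info().rss // 1024, "KiB")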

@pawel-lmcb (Author)

FYI, here is a result showing that the requests lib is leaking (we used memory_profiler: https://pypi.org/project/memory-profiler/):

Line # Mem usage Increment Line Contents

76  36.5703 MiB  36.5703 MiB       @profile(stream=fp, precision=4)
77                                 def flush(self):
78  36.5703 MiB   0.0000 MiB         timeout = 5
79  36.5703 MiB   0.0000 MiB         self.bulk['metadata'] = self.metadata
80  36.5703 MiB   0.0000 MiB         self.bulk['sent_at'] = str(datetime.datetime.utcnow())
81  36.5703 MiB   0.0000 MiB         if 'TEST' in os.environ:
82                                     f = open('/tmp/glances-out', 'w')
83                                     f.write(json.dumps(self.bulk))
84                                     f.close()
85                                     os._exit(0)
86                                   else:
87  36.5703 MiB   0.0000 MiB           try:
88  36.7266 MiB   0.1562 MiB               r = requests.post(self.http_endpoint, json=self.bulk, headers=self.headers, timeout=timeout)
89                                     except Exception as e:
90                                         logger.debug('export http - Cannot connect to the endpoint {}: {}'.format(self.http_endpoint, e))
91  36.7266 MiB   0.0000 MiB         self.bulk = {}

I swapped out requests for urllib3 here; I'll post results after this runs overnight and we collect a couple of hours' worth of data.
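For reference, the swap looked roughly like this; a sketch modeled on the flush() profiled above (attribute names follow that snippet), not the actual Glances exporter code:

# Sketch of the requests -> urllib3 swap inside a flush() like the one profiled above.
# Attribute names (http_endpoint, headers, bulk) follow the profiled snippet;
# the surrounding class is illustrative only.
import json
import urllib3

class HttpExporter(object):
    def __init__(self, http_endpoint, headers=None):
        self.http = urllib3.PoolManager()
        self.http_endpoint = http_endpoint
        self.headers = dict(headers or {}, **{"Content-Type": "application/json"})
        self.bulk = {}

    def flush(self):
        body = json.dumps(self.bulk).encode("utf-8")
        try:
            self.http.request("POST", self.http_endpoint,
                              body=body, headers=self.headers, timeout=5.0)
        except urllib3.exceptions.HTTPError as e:
            print("export http - cannot reach {}: {}".format(self.http_endpoint, e))
        self.bulk = {}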

@pawel-lmcb (Author)

So I just wanted to update this in case anyone else has this problem. As I mentioned, I commented out some lines, and that reduced the memory leak we saw.

However, we were still seeing memory leaks coming from the requests library, as posted above, so I decided to drop requests in favor of urllib3. This further reduced the memory leak but did not solve it.

Using memory_profiler, I then realized that the standard json library was also leaking (in the json.dumps call before I passed self.bulk into urllib3's post method), in addition to urllib3 itself.

So I switched to ujson, which resolved the json memory leak; urllib3, however, was still leaking. I then noticed that some of my instances were not leaking memory, so I upgraded urllib3 to 1.24.1, which resolved the remaining leaks.

Long story short: urllib3 1.24.1 + ujson 1.35, posting any amount of Glances data to an API endpoint, works great and shows no memory leaks. I'll also try reverting to requests to see whether I can find a version of it that does not leak.
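A quick sanity check of the combination described above; a sketch only (the payload is a placeholder, and ujson.dumps is used as a drop-in for json.dumps on plain dict/list data):

# Sanity check of the combination described above: ujson for serialization,
# urllib3 >= 1.24.1 for the HTTP POST.
import ujson
import urllib3

print("urllib3", urllib3.__version__)  # 1.24.1 resolved the remaining leak here

bulk = {"metadata": {"host": "test"}, "values": list(range(1000))}
body = ujson.dumps(bulk).encode("utf-8")  # drop-in for json.dumps on plain dicts/lists
print(len(body), "bytes ready to POST")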

@pawel-lmcb (Author)

[Screenshot: Screen Shot 2019-03-26 at 8:05 PM]

Just wanted to show our results: purple (bottom-most) is urllib3==1.24.1, orange (top-most) is urllib3==1.19.1.

@ghost commented Dec 8, 2020

The memory usage of glances constantly rises over time:

Glances v3.1.5 with PsUtil v5.7.0
Glances arguments: -t1
Python: CPython 3.7.9

TIMESTAMP             VIRT    RSS   (KiB)
2020-12-08 20:20:01: 204852  45044
2020-12-08 20:25:01: 213020  53076
2020-12-08 20:30:01: 220816  61100
2020-12-08 20:35:01: 228876  68988
2020-12-08 20:40:01: 236672  76944
2020-12-08 20:45:01: 244576  84880
2020-12-08 20:50:01: 252660  92796
2020-12-08 20:55:01: 260512 100760
2020-12-08 21:00:01: 268548 108648
2020-12-08 21:05:01: 276664 116816
2020-12-08 21:10:01: 284652 124912
2020-12-08 21:15:01: 292864 133096
2020-12-08 21:20:01: 300612 140796
2020-12-08 21:25:01: 308652 148904

RSS increase per unit of time: 100 MB/hour

Please try to find a fix for this issue. Thanks.
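For anyone who wants to collect the same kind of log, here is a small psutil sketch (locating the process by name is an assumption; adjust to your setup):

# Sketch: log Glances' VIRT/RSS (in KiB) every 5 minutes, like the table above.
# Locating the process by name is an assumption; adjust to your setup.
import datetime
import time
import psutil

def find_glances():
    for p in psutil.process_iter(["name", "cmdline"]):
        cmd = " ".join(p.info["cmdline"] or [])
        if "glances" in (p.info["name"] or "") or "glances" in cmd:
            return p
    raise RuntimeError("glances process not found")

proc = find_glances()
while True:
    mem = proc.memory_info()
    stamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    print("{}: {:7d} {:6d}".format(stamp, mem.vms // 1024, mem.rss // 1024))
    time.sleep(300)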

@webern commented Feb 28, 2022

Same here. I like to keep it running but it grows by 1GB per day.

@viseshrp commented Nov 11, 2022

@nicolargo I have the same problem. I have set history_size to 0, but memory keeps growing endlessly on a Raspberry Pi I have, running 3.3.0.4.

[screenshot]

@viseshrp

@nicolargo Sorry, but I cannot find a --disable-history option on the command line. Memory is at 32% now, while my actual app is using only 7%.
[screenshot]

I hope it doesn't kill my Raspberry Pi :) Please let me know if there is any other way to solve this. I also tried --disable-plugin with many plugins and it does not help. Thank you, I appreciate your work.

@nicolargo (Owner)

The --disable-history option should be available in your version:

$ glances --help | grep history
               [--disable-history] [--disable-bold] [--disable-bg]
  --disable-history     disable stats history

With this option the memory consumption is more or less constant on my system:

[Graph: glances-memory-profiling-without-history]

@nicolargo (Owner)

I have added a new memory profiling test for Glances.

Here are the results with history enabled (memory grows until the history size is reached and stays constant after that):

https://github.com/nicolargo/glances/blob/develop/docs/_static/glances-memory-profiling-with-history.png

and with history disabled (using the --disable-history option):

https://github.com/nicolargo/glances/blob/develop/docs/_static/glances-memory-profiling-without-history.png

@viseshrp

> I have added a new memory profiling test for Glances.
>
> Here are the results with history enabled (memory grows until the history size is reached and stays constant after that):
>
> https://github.com/nicolargo/glances/blob/develop/docs/_static/glances-memory-profiling-with-history.png
>
> and with history disabled (using the --disable-history option):
>
> https://github.com/nicolargo/glances/blob/develop/docs/_static/glances-memory-profiling-without-history.png

Thank you very much. --disable-history has been working well for me; memory now stays consistently low.
The profiling tests are useful as well, but I'm curious what is expected when history_size is zero. Isn't it supposed to be the same as --disable-history, since memory is expected to grow until the history size, which is now zero?

@nicolargo (Owner)

> Isn't it supposed to be the same as --disable-history, since memory is expected to grow until the history size, which is now zero?

It should...
I am going to test it.

@nicolargo (Owner)

On my system (with the Glances develop branch), the behavior is the same between --disable-history and history_size=0.

[Graph: Figure_1]

@viseshrp commented Nov 13, 2022

> On my system (with the Glances develop branch), the behavior is the same between --disable-history and history_size=0.
>
> [Graph: Figure_1]

I see you stopped it after a certain time. I let it run constantly to monitor my servers 24x7, so that could be the reason; 2500 ms might not be a great test for this. I'd recommend watching it for at least a day to run into what people have mentioned here.

/opt/glances/venv/bin/glances --server --disable-webui --disable-history

@nicolargo (Owner)

Ah!!! You use Glances in server mode!
Perhaps the issue is in this mode, I will have a look.

@nicolargo added the bug label Nov 14, 2022
@nicolargo added this to the Glances 3.3.1 milestone Nov 14, 2022
@viseshrp

> Ah!!! You use Glances in server mode! Perhaps the issue is in this mode, I will have a look.

That might not be the reason, unfortunately. I tried both web server and server mode; it happens in both, but I did find that memory increases more slowly in server mode.

@nicolargo (Owner)

I just made a quick test (2 hours) in server mode (-s) with --disable-history and the memory does not grow...

I am going to run a long test (24 hours).

@nicolargo (Owner) commented Nov 16, 2022

Reproduced with a long test in server mode (-s):

[Graph: Figure_2]

So about a 4 MB leak in 18 hours (roughly 200 KB per hour).
The memory continues to grow even beyond history_size=1200...

Same behavior in web server mode (-w) with the --disable-history flag:

[Graph: Figure_3]

and also in standalone mode:

[Graph: Figure_4]

@nicolargo changed the title from "Memory Leak 3.1.0" to "Memory Leak" on Nov 16, 2022
@pawel-lmcb (Author)

Right, so going back to my original post: I don't think this is a Glances problem but rather a memory leak in the underlying libraries that it uses.

The test above is a good place to start. Would it be possible to have the server restart itself every hour? I'd love to see what the memory-usage graph would look like.

@nicolargo (Owner)

No more memory leak after replacing json with ujson:

With history (memory increases and then stabilizes):

https://github.com/nicolargo/glances/blob/c330e07e34d8af5ad963e712f613fb2a562d99d1/docs/_static/glances-memory-profiling-with-history.png

and without history (stabilizes):

https://github.com/nicolargo/glances/blob/c330e07e34d8af5ad963e712f613fb2a562d99d1/docs/_static/glances-memory-profiling-without-history.png
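For reference, this kind of swap is usually done with a conditional import so the stdlib json remains a fallback; a minimal sketch, not necessarily the exact change made in Glances:

# Minimal sketch of a conditional ujson import with a stdlib fallback.
# Not necessarily the exact patch applied to Glances.
try:
    import ujson as json  # faster C implementation; avoids the leak seen here
except ImportError:
    import json  # stdlib fallback when ujson is not installed

payload = {"cpu": {"total": 12.3}, "mem": {"percent": 45.6}}
print(json.dumps(payload))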
