God Daemon taking up huge amounts of memory #1126
Comments
I'm starting to see the same behavior. The PM2 daemon & PM2 Keymetrics are slowly increasing their memory footprint over time. Running PM2 v0.12.10 with node v0.12.2 on a Digital Ocean Droplet (Ubuntu 14.04).
Did you integrate pmx into your application as well? Just trying to figure out whether the issue comes from pm2 or from pmx :)
@jorge-d No, I haven't used pmx - only pm2 with Keymetrics.
@danieljuhl @mattstrayer How many applications do you run? In which mode (cluster/fork)? Do your processes constantly restart? If you still have the pm2 logs (~/.pm2/pm2.log), we would appreciate knowing whether something strange happened. We are using it at Keymetrics to manage processes on 6 different servers (Ubuntu 14.04, Node 0.12.2) running 10 applications (4 in cluster mode and 6 in fork mode) and we don't see this behavior.
I've seen the issue on an instance which runs two apps: the built-in http service and one other application in simple fork mode ("pm2 start app.js"). That application is a heavy backend app, but it did not restart during the test period in which pm2's memory grew. I use the max memory restart option, but that threshold was not reached during the test period. I'll have a look at the pm2 logs.
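For reference, the restart-on-memory behavior mentioned above is PM2's --max-memory-restart option; a minimal example (the 300M threshold is just a placeholder, not the commenter's actual setting):
# Restart the app whenever it exceeds the given memory threshold
$ pm2 start app.js --max-memory-restart 300M
# Watch per-process memory to confirm the threshold is sensible
$ pm2 monit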
I ran into issues with pmx too: my application had a memory leak when using pm2 0.12. I checked with heapdump and found that an object using domain was the source of the leak, and it seems related to pmx, so I disabled the pmx init in processContainer.js and everything now runs fine. I haven't worked out a minimal demo for this case due to time constraints, but hopefully this is a useful hint.
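A rough sketch of the heapdump workflow described above (the heapdump npm module and the --require preload are assumptions about this setup, not details taken from the comment):
# Preload heapdump so that SIGUSR2 makes the process write a .heapsnapshot file
$ npm install heapdump
$ node --require heapdump app.js &
$ kill -USR2 $!
# A file like heapdump-<timestamp>.heapsnapshot appears in the working directory;
# take a second snapshot later and diff the two in Chrome DevTools (Memory tab)
# to see which objects (e.g. domain-related ones) keep growing.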
Hi, I have the same issue: CentOS 6 (32-bit), pm2@0.12.11, node@0.12.2. I'm running one application in fork mode, and according to pm2 monit it eats just 64MB, but htop shows 5 child "God Daemon" processes, as well as 5 copies of my app running, which overall takes about 40% of all memory.
It's been fixed with 0.12.11; we now force the GC, and PM2 now stays at around 50MB (4 benchmarks on 4 machines with badly behaving applications).
Yeah, I'm sorry, I spent all day investigating this and it turned out it was Unix caches that were growing... after clearing the caches everything runs smoothly. Thanks for the great product!
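For anyone hitting the same confusion, a quick way to tell page-cache usage apart from real memory pressure (standard Linux commands, not taken from this thread):
# free counts page cache as "used"; the buffers/cache line shows what is reclaimable
$ free -m
# Drop clean caches to confirm the memory comes back (harmless, but rarely needed in production)
$ sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
$ free -m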
Thank you Alexandre.
👎 Forcing GC is not a good idea, and it requires the user to expose GC, which might be abused. I strongly recommend that we look into pmx #1126 (comment) rather than triggering GC manually.
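For context on the objection above: forcing GC only works when Node is started with V8's --expose-gc flag, which makes global.gc() callable from anywhere in the process. A minimal illustration, not PM2's actual implementation:
# Without --expose-gc, global.gc is undefined; with it, any module can trigger a full collection
$ node --expose-gc -e "global.gc(); console.log(process.memoryUsage().heapUsed);"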
I'm using version 12.12 and I still have this issue. The amount of memory used by my application gets bigger and bigger. It seems that garbage collection is not working at all.
Using 0.15.10 and it also blows up. One app is all it runs: pm2 start /opt/node/latest/app.js |
FYI I'm troubleshooting this here. I think in my case the issue is that pm2 is being run as the root and ubuntu users alternately. I'm using Amazon CodeDeploy, where the lifecycle scripts ApplicationStart and ValidateService are 'accidentally' being run as a different user than the one that originally started pm2. So there are two God processes: one owned by root and one owned by ubuntu.
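A quick way to confirm the duplicate-daemon situation described above (the exact process title varies by PM2 version, so the grep pattern is only a guess):
# List PM2 / God daemon processes together with the user that owns each one
$ ps -eo user,pid,etime,command | grep -i "[g]od daemon\|[p]m2"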
Same issue here on Ubuntu 14.04, PM2 1.1.3 and node 4.4.7. The problem exists even without running any app.
Please post a screenshot of htop, and also install the new PM2 v2:
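Presumably the upgrade steps being asked for are the standard npm ones (not quoted from the original comment):
$ npm install -g pm2@latest
# Refresh the in-memory PM2 daemon so it runs the newly installed version
$ pm2 update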
I was using the latest pm2, downloaded 2 days ago. Somehow I've managed to fix it just by clearing all the caches, dumps, logs, and startup scripts. I can't reproduce the problem anymore.
Permission problems may happen; sometimes you need to check that ~/.pm2 has the right permissions:
$ pm2 save
$ chown -R $USER:$USER ~/.pm2
$ pkill -f PM2
$ pm2 resurrect
Hey, this issue is occurring again on PM2 v5.1.0. What should I do? It takes around 400 MB of RAM.
I just experienced this issue and found the root cause to be Node.js backpressure: pm2 failed to flush the logs to disk in time (too much log output). The issue was solved by reducing log output.
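If log volume is the trigger, one mitigation is to reduce or redirect what PM2 has to write; a sketch using PM2's output/error flags (whether this fully avoids the backpressure in a given setup is an assumption):
# Route the app's output away from PM2's log files (or lower the app's own log level instead)
$ pm2 start app.js -o /dev/null -e /dev/null
# Clear log files that have already grown large
$ pm2 flush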
vizion can cause large memory use; see issue #5051. It can be configured via the CLI or the application declaration file: https://pm2.io/docs/runtime/reference/pm2-cli/ https://pm2.keymetrics.io/docs/usage/application-declaration/
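If vizion (PM2's version-control metadata collector) is the culprit, it can be disabled at start time; a sketch, assuming a PM2 version that supports the --no-vizion flag:
$ pm2 start app.js --no-vizion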
The app itself runs at about 60-70MB of memory. It's running on a small Digital Ocean Droplet (Ubuntu 14.04), but the God Daemon takes up the rest of the 512MB of memory and clogs the system. Node is running v0.12.