Memory leak even with homepage request! #2779
Odd bug. Perhaps you have some stray async code in your controllers, or a database connection issue? Are you experienced with async Node.js (not being facetious, seriously)? Some async operations will hang things if the Node.js calls are not coded correctly (eventually a timeout is the desired approach). |
@crh3675 This is a new instance of Sails.js; I didn't add any code except what I mentioned above. |
What is the function of your application? Care to share some code? |
@crh3675 As I said above, it's just a brand-new Sails instance. All I did was add the above JavaScript to the default homepage of Sails. |
Are you running nginx? Apache? You can't bind to lower ports unless you are the root user. Is there a proxy in between? |
I can confirm this issue with node.js 0.12 and Mac OS 10.10.2. Steps to reproduce:
Memory usage |
Your test definitely shows memory increasing; perhaps try with ApacheBench, since loadtest is itself a Node.js app: http://httpd.apache.org/docs/2.2/programs/ab.html. It's not that I don't trust your results; I am curious to see if Node.js is somehow sharing memory between processes with V8 and garbage collection. |
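For reference, a basic ApacheBench run against the default homepage would look something like this (assumes the stock Sails port 1337; adjust the URL and request counts to your setup):

# 10,000 requests, 100 concurrent, against the default Sails homepage
ab -n 10000 -c 100 http://localhost:1337/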
I can confirm this issue with node.js 0.12 and Windows 8.1. Steps to reproduce:
Memory usage |
@kanecko have you waited for GC? Or you can add this in app.js:

setInterval(function () {
  console.log('NOW gc .......');
  global.gc();
}, 10000);

Then start Sails like this:

$ node --expose_gc app.js

After that you can run an ab test and watch the memory usage.
|
I've now tried my test again where I've waited for gc, but the mem usage just went up and up. Every 10k requests adds about 100 MB for me. |
Wait a second.... This code is broken:
and it will KILL any application, because you are calling reload non-stop, which is creating your memory leak. It should be:
|
Any update on this? |
I would have to say "bad code" created this problem, as per my last comment. Unless the front-end code is actually different, this pain might have been self-inflicted :-) |
what about @anhdd-savvycom ? |
I don't use any javascript in my homepage and I get the same issue. |
This needs a failing unit test. Theoretically, requesting |
Don't use JavaScript on the home page? Then what is this original post about?
Where did you add that code (the EJS file)? You can't add that to Sails server-side code; it does nothing but blow things up (most likely). |
I did some heavy-duty load testing on my machine and found out that after approximately 200k requests, the memory usage stops increasing. Below you can find the measurements.
# Initial state
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
rossmr01 30472 1.0 0.4 739332 80416 pts/12 Sl 12:26 0:00 node /usr/local/bin/sails lift
# After 100000 requests
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
rossmr01 30472 52.0 6.9 1792776 1136936 pts/12 Sl 12:26 2:40 node /usr/local/bin/sails lift
# After another 100000 requests
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
rossmr01 30472 63.9 9.2 2167348 1511572 pts/12 Sl 12:26 5:28 node /usr/local/bin/sails lift
# After yet another 100000 requests
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
rossmr01 30472 64.3 9.2 2166736 1510140 pts/12 Sl 12:26 8:22 node /usr/local/bin/sails lift

Fresh Sails app with initial configuration. Grunt hook has been disabled. Host: Node: 0.12.2 |
Fair enough. Here's a graph I plotted from the collected snapshots. Source: Google Drive. Code that generated the snapshots:
// I put this in the config/bootstrap.js file
var fs = require('fs')
, snapshots = []
// WARNING - terrible implementation, use fs.createWriteStream() instead!
// (see note below)
setInterval(function takeSnapshot() {
var mem = process.memoryUsage()
mem.timestamp = Date.parse(new Date) / 1000 // Unix timestamp
snapshots.push(mem)
}, 1000) // Snapshot every second
// On exit, dump the snapshots into a json file
process.on('exit', function () {
fs.writeFileSync('./memorysnapshot.json', JSON.stringify(snapshots), 'utf8')
})
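For completeness, a streaming variant of the same logger, sketched with fs.createWriteStream() so snapshots are appended to disk instead of buffered in memory (filename and interval are arbitrary choices):

var fs = require('fs')
var out = fs.createWriteStream('./memorysnapshot.ndjson')

setInterval(function takeSnapshot() {
  var mem = process.memoryUsage()
  mem.timestamp = Math.floor(Date.now() / 1000) // Unix timestamp in seconds
  out.write(JSON.stringify(mem) + '\n') // one JSON line per snapshot
}, 1000) // snapshot every second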
Stress test performed with ApacheBench: The results are inconclusive to me; to my understanding, rss and heapTotal may be freed by the system whenever another process requests memory, all the way down to heapUsed, which represents the actual amount of data currently used by V8. However, even heapUsed increases slightly over time. The jagged edges are likely the result of V8 performing garbage collection. |
I took three heapdumps using node-heapdump.
A comparison between the second and third shows us where the memory is being continuously spent (there's a lot of memory allocation/deallocation as V8 performs optimisations during the initial 100k hits, which is of little interest to us). My preliminary and very shallow research suggests there might be some memory leaks in the router, but my skill at analysing these dumps and knowledge of Sails' internals are too limited to properly track these down. A good article about memory analysis was posted aeons ago by StrongLoop. |
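For anyone who wants to reproduce those dumps, a minimal sketch using the node-heapdump package (the filename pattern is an arbitrary choice; successive snapshots can then be compared in Chrome DevTools):

var heapdump = require('heapdump'); // npm install heapdump

// Write a timestamped heap snapshot to the working directory
heapdump.writeSnapshot('./' + Date.now() + '.heapsnapshot', function (err, filename) {
  if (err) { return console.error(err); }
  console.log('Heap snapshot written to', filename);
});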
Not sure if this is relevant or not, but since Sails is built on Express: Netflix wrote a blog post a few months back about Express memory consumption caused by the route handler. It's an interesting article and may or may not provide some insight into the issue? |
Ha! It's quite obvious, actually. :)
We have all been bashing at Sails with tools like ApacheBench to stress-test the application and observing how our memory goes through the roof. The problem is, each of these requests creates a new session in Sails. To accurately measure the memory usage of a fresh Sails application, we must first disable sessions (or find a way to re-use a valid session via cookie). Once we do that, we will see this memory usage pattern:
Conclusion: No indication of a memory leak. For reference, here are also the heapdumps for the stress test I performed with sessions disabled, if one is so inclined to study them in detail. |
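As an example of the cookie re-use approach, ApacheBench can send a fixed session cookie with every request (sails.sid is the default Sails session cookie name; the value below is a placeholder you would copy from a real response):

# Re-use one session for the whole benchmark instead of creating 100k new ones
ab -n 100000 -c 100 -C "sails.sid=PASTE_A_VALID_COOKIE_VALUE_HERE" http://localhost:1337/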
@Alaneor nice work, very helpful. The funny thing is that I think @anhdd-savvycom wrote some bad code that spurred all this hard work. Thanks everyone for contributing. |
Finally what is the answer? I have the same problem. |
I do not believe there is any memory leak present in the default Sails application. If you are seeing a constantly increasing memory footprint, then my suggestion is to switch to a database-backed session store (Sails uses an in-memory session store by default). Additionally, you can see from the graph above that memory usage could get as high as 250 MB, so unless you are seeing gigabytes of RAM consumed, there's no need to panic (yet). |
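A rough sketch of what switching to a database-backed store looks like in config/session.js (the Redis adapter and options shown here are assumptions; check the session adapter documentation for your Sails version):

// config/session.js
module.exports.session = {
  // Store sessions in Redis instead of the default in-memory store
  adapter: 'redis',
  host: 'localhost',
  port: 6379,
  db: 0,
  prefix: 'sess:'
};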
I tested by creating a new project like this: 'sails new sailsTest'. It's an empty new project; just run it. I guess maybe the response object isn't being released. Looking forward to a reply. |
That's not a test, really. It's absolutely natural for a Node.js application to consume increasingly more memory immediately after you start the process; it generates optimised code and various other stuff. If you really want to check for memory leaks, first get rid of the in-memory session store, then do a "warm-up" (generate at least several hundred requests) and then measure your memory usage. Just look at the graph above: the first spike in memory usage is what happens when I hit the application with a thousand requests per second. |
Look guys, if you're nervous that some line chart appears high in your opinion, that's not a bug. Node allocates memory for all kinds of reasons, and trying to divine all these reasons is only going to lead to confusion. I'm not convinced until you've been running a process for weeks or months, and the memory slowly increases until the machine crashes; or, if you can write a test case that reproduces the issue and you can reliably cause a memory explosion to like 10GB or something. |
I tested using MongoDB to save sessions; still the same. |
Like @Alaneor said,
@lvsenlin if you're seeing this only with mongo, then try filing an issue in the mongo adapter repository. |
OK, thanks. Sorry to trouble you. |
I know this is a slightly old issue; however, we had a similar issue when running Sails in cluster mode with PM2. We kept seeing session requests on Redis just sitting there doing nothing and leaking memory. It turns out that it was the sockets/pubsub hook. We disabled those in the |
{ |
I just saw some crazy behavior as well with a single Sails API using 0.12-rc4. The app was consuming 1.6 GB of RAM. I disabled pubsub, grunt, i18n and sessions within .sailsrc and now the app uses a comfortable 80 MB. This is running on CentOS 6, connecting to a local MySQL database.
Note that I used the no-frontend option, so I don't need all those. Just a simple REST API. |
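For anyone wanting to try the same, disabling hooks via .sailsrc looks roughly like this (hook names taken from the comments above; verify the exact names against your Sails version):

{
  "hooks": {
    "grunt": false,
    "i18n": false,
    "session": false,
    "pubsub": false,
    "sockets": false
  }
}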
probably grunt. Sent from my iPhone
|
Grunt was already disabled :-( The others I just added. So it can't be grunt. I guess the only true test is to step through each hook and see which causes the issue. Unless someone has a better solution. |
Still having the issue with memory climbing. It's nowhere near where it was, but after about 7 days, the basic Sails no-frontend API that starts around 90 MB is now at 962 MB. |
Note I am using PM2 as well across two instances using NodeJS 0.12.7. Perhaps an upgrade of NodeJS is in order again |
Weeee! Add this to the main app.js file:
Force garbage collection every 30 seconds |
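A sketch of what that snippet presumably looks like (manual collection only works when Node is launched with --expose-gc, otherwise global.gc is undefined):

// Force garbage collection every 30 seconds
// Requires: node --expose-gc app.js (or: node --expose_gc app.js)
if (typeof global.gc === 'function') {
  setInterval(function () {
    global.gc();
  }, 30000);
}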
@crh3675
Is running every 30 seconds a good idea? |
Sails.js cannot (and should not) do this for the simple fact that
This is not a solution. Manually executing GC will not help at all, because a memory leak, by its very definition, means you have a piece of code that keeps allocating new resources which cannot be garbage-collected. Additionally, it has been confirmed on several occasions in this thread that there is no memory leak in Sails, and that the observed behaviour of increasing memory usage in a brand-new Sails app is attributable to the default session store, which keeps sessions in memory. |
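To illustrate why manual GC cannot help with a genuine leak, here is a contrived app-level example (not code from Sails; the middleware and array names are made up): every request appends to a module-level array that is never pruned, so the objects stay reachable and no garbage collection pass can free them.

// Contrived leak: requestLog grows forever because nothing ever removes entries
var requestLog = [];

module.exports = function logEveryRequest(req, res, next) {
  requestLog.push({ url: req.url, headers: req.headers, at: Date.now() });
  return next();
};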
I understand your point. But the fact is that if you leave a basic API running (no frontend, so discard the Grunt hook) with the session, sockets and pubsub hooks disabled, the memory gradually increases (and never seems to stop). Even when the session hook was enabled, I wasn't using the default memory store; I set up my Mongo adapter and confirmed that it was saving sessions there. I will be trying the forced GC execution. If memory usage seems stable with that fix, that means it's not a Sails.js leak; it might be related to the Node version in that case. |
@Alaneor, I have found that manually firing off V8 garbage collection is the only way to release the memory. The code that keeps allocating new memory is not my code; it is bare-bones Sails.js using the sails-mysql 0.11.2 adapter. Going from 90 MB to over 1 GB of accumulated RAM is quite astonishing and needs to be revisited as a Sails.js issue. As for running garbage collection every 30 seconds, I haven't seen any issues with doing so. |
@crh3675 @maxiejbe I've spent a long time trying to find the phantom memory leaks claimed over the years. I have yet to find anywhere in the Sails/Waterline codebase where any leaks occur. There are a few issues you can search for and read the results. The only way I've been able to get memory to grow is by using node |
Perhaps it is time to upgrade to 4.4 then |
To be clear, we still investigate every claim to the best of our ability-- but the community around Sails has grown to the point now that lately we find ourselves investigating potential memory leaks at least once per month, if not more often. And as @particlebanana pointed out, despite spending many hours diving in on numerous occasions, we just haven't been able to replicate a leak (other than the Node v0.12-specific issue mentioned above). As you can imagine, it's gotten to the point where it's like the boy who cried wolf-- which isn't good for anyone. There are definitely situations where memory leaks can occur in Sails apps (happened to me more than I'd care to talk about)-- it's just been my experience up until this point that those leaks come from either using non-production settings or from issues in app level code. Something to keep in mind is that @particlebanana and I are running a couple of active apps on the latest stable version of Sails ourselves; thus we're very aware of production issues in the latest stable version of Sails (i.e. if they arise, they hit us very close to home). If a memory leak was found in Sails or Waterline core w/ recommended production settings, we'd expect to find out about it pretty quickly. But since we can't be 100% certain (because other adapters could be in play, Node version differences, etc) we have to check into this kind of thing every time. So here's what I'd ask from everyone going forward: Before reporting a potential memory leak
Really appreciate the help! 🙇 |
After regression-testing the different versions of Node.js, 0.12 is the culprit, as 4.4 doesn't have the memory issues. We used Apache Bench to hit the Sails API endpoint 10,000 times with both versions of Node.js. 0.12.7 didn't garbage-collect for some reason, whereas 4.4 did. |
@crh3675 interesting... Thanks for the update! |
I'm seeing the same memory increase issue with 4.4.7. I've tested with a fresh no-frontend project, disabled session, grunt, socket. #3782 |
@marspark Thanks for opening a new issue- I responded here: #3782 (comment) For posterity: I also added instructions at that link (^^) on how to reproduce. Re: reporting memory leaks, see my comment above in this thread. If anyone reading this suspects a memory leak, please run through those instructions and let us know ASAP in a new issue. Thanks! |
I created a new Sails project and added some lines to the default homepage view:
I make the page auto-reload every 2 seconds, and this causes my VPS memory usage to keep increasing until the VPS crashes from running out of memory. I can't even SSH into the VPS and have to restart it from cPanel.
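For illustration, an auto-reload like the one described is typically just a client-side timer along these lines (a hypothetical sketch placed in the homepage view's script tag, not necessarily the exact original code):

// Inside a <script> tag in the default homepage view (EJS)
setTimeout(function () {
  window.location.reload(); // reload the page after 2 seconds
}, 2000);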
Is this normal? If not, how can I fix this?
Thanks