
Memory leak even with homepage request! #2779

Closed
duncan-dao-90 opened this issue Mar 24, 2015 · 52 comments

@duncan-dao-90

I create a new sails project, add some lines into the default homepage view:

<script>
    setTimeout(function(window.location.reload();), 2000);
</script>

This makes the page auto-reload every 2 seconds, and it causes my VPS memory usage to keep increasing until the VPS crashes because it is out of memory. I can't even SSH into the VPS and have to restart it from cPanel.

Is this normal? If not, how can I fix this?
Thanks

@duncan-dao-90 duncan-dao-90 changed the title Memory leak only on homepage request! Memory leak even with homepage request! Mar 24, 2015
@crh3675

crh3675 commented Mar 25, 2015

Odd bug. Perhaps you have some off-course async code in your controllers, or a database connection issue? Are you experienced with async Node.js (not being facetious, seriously)? Some async operations will hang things if Node.js calls are not coded correctly (an eventual timeout is the desired approach).

@duncan-dao-90
Author

@crh3675 This is a fresh Sails.js instance; I didn't add any code except what I mentioned above.
The memory keeps increasing. When I close a Chrome tab, the memory does not decrease, which means the memory used for that request hasn't been returned. I'm really confused here; is this the normal behavior of Node.js/Sails.js?

@crh3675

crh3675 commented Mar 25, 2015

What is the function of your application? Care to share some code?

@duncan-dao-90
Author

@crh3675 As I said above, it's just a brand-new Sails instance. All I did was add the JavaScript above to the default Sails homepage.

@crh3675

crh3675 commented Mar 25, 2015

Are you running nginx? Apache? You can't bind to the lower ports unless you are the root user. Is there a proxy in between?

@Alpha200

I can confirm this issue with node.js 0.12 and Mac OS 10.10.2. Steps to reproduce:

  1. sails new MemoryTest
  2. sails lift
  3. Loadtest with https://www.npmjs.com/package/loadtest: loadtest -n 10000 -c 10 http://localhost:1337

Memory usage
Before: 78.016MB
After: 214.059 MB

@crh3675

crh3675 commented Mar 31, 2015

Your test definitely shows memory increasing; perhaps try with ApacheBench, since loadtest is itself a Node.js app: http://httpd.apache.org/docs/2.2/programs/ab.html. Not that I don't trust your results; I am curious to see whether Node.js is somehow sharing memory between processes with V8 and garbage collection.

@kanecko

kanecko commented Mar 31, 2015

I can confirm this issue with node.js 0.12 and Windows 8.1. Steps to reproduce:

  1. sails new MemoryTest
  2. sails lift
  3. ab -n 10000 -c 10 http://localhost:1337

Memory usage
Before: 80 MB
After: 273 MB

@virteman

@kanecko have you waited for GC? Or you can add this to app.js:

setInterval(function(){
    console.log( 'NOW gc .......' );
    global.gc();
}, 10000);

Then start Sails like this; after that you can run the ab test and watch the memory usage:

$ node --expose_gc app.js 

@kanecko

kanecko commented Apr 5, 2015

I've now tried my test again, waiting for GC this time, but the memory usage just went up and up. Every 10k requests adds about 100 MB for me.

@crh3675

crh3675 commented Apr 5, 2015

Wait a second... this code is broken:

<script>
    setTimeout(function(window.location.reload();), 2000);
</script>

and will KILL any application because you are calling reload non-stop, which is creating your memory leak. It should be:

<script>
    setTimeout(function() { 
       window.location.reload(); 
    }, 2000);
</script>

@tjwebb tjwebb added the question label Apr 7, 2015
@tjwebb
Contributor

tjwebb commented Apr 7, 2015

Any update on this?

@crh3675

crh3675 commented Apr 7, 2015

I would have to say "bad code" created this problem, as per my last comment. Unless the front-end code is actually different, this pain might have been self-inflicted :-)

@tjwebb
Contributor

tjwebb commented Apr 7, 2015

What about @anhdd-savvycom?

@kanecko

kanecko commented Apr 8, 2015

I don't use any javascript in my homepage and I get the same issue.

@tjwebb
Contributor

tjwebb commented Apr 8, 2015

This needs a failing unit test. Theoretically, requesting / in a loop a thousand times should crash the server, yes?

@crh3675

crh3675 commented Apr 8, 2015

Don't use JavaScript on the home page? Then what is the original post about?

<script>
    setTimeout(function(window.location.reload();), 2000);
</script>

Where did you add that code (an EJS file)? You can't add it to Sails server-side code; it does nothing but blow things up (most likely).

@robertrossmann
Contributor

I did some heavy-duty load testing on my machine and found that after approximately 200k requests, the memory usage stops increasing. Below are ps aux snapshots taken at 100k-request intervals.

# Initial state
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
rossmr01 30472  1.0  0.4 739332 80416 pts/12   Sl   12:26   0:00 node /usr/local/bin/sails lift

# After 100000 requests
USER       PID %CPU  %MEM    VSZ     RSS TTY    STAT START   TIME COMMAND
rossmr01 30472 52.0  6.9 1792776 1136936 pts/12 Sl   12:26   2:40 node /usr/local/bin/sails lift

# After another 100000 requests
USER       PID %CPU  %MEM    VSZ     RSS TTY    STAT START   TIME COMMAND
rossmr01 30472 63.9  9.2 2167348 1511572 pts/12 Sl   12:26   5:28 node /usr/local/bin/sails lift

# After yet another 100000 requests
USER       PID %CPU  %MEM    VSZ     RSS TTY    STAT START   TIME COMMAND
rossmr01 30472 64.3  9.2 2166736 1510140 pts/12 Sl   12:26   8:22 node /usr/local/bin/sails lift

Fresh Sails app with initial configuration. Grunt hook has been disabled.

Host:
Linux localhost 3.13.0-48-generic #80-Ubuntu SMP Thu Mar 12 11:16:15 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Node: 0.12.2

@tjwebb
Contributor

tjwebb commented Apr 9, 2015

Below you can find ps aux snapshots taken at 100k-request intervals.

ps aux is not the correct way to measure memory usage in node. The RAM usage that the OS reports is not at all indicative of whether a memory leak might or might not exist. More useful is the output of process.memoryUsage().

@robertrossmann
Contributor

Fair enough. Here's a graph I plotted from process.memoryUsage().

Source: Google Drive

Code that generated the snapshot:

// I put this in the config/bootstrap.js file

var fs = require('fs')
  , snapshots = []

// WARNING - terrible implementation, use fs.createWriteStream() instead!
// (see note below)
setInterval(function takeSnapshot() {
  var mem = process.memoryUsage()
  mem.timestamp = Date.parse(new Date) / 1000  // Unix timestamp
  snapshots.push(mem)
}, 1000)  // Snapshot every second

// On exit, dump the snapshots into a json file
process.on('exit', function () {
  fs.writeFileSync('./memorysnapshot.json', JSON.stringify(snapshots), 'utf8')
})

Update: The code above for monitoring memory usage is absolutely terrible, because it creates a new object in memory every second and only empties them into a file once the process terminates. This could lead to false-positive results while investigating memory leaks. I am keeping it here for reference as to how not to monitor memory usage. The proper way would be to use a file stream and write the data directly to the file every second.

Stress test performed with ApacheBench:
ab -c 500 -n 200000 http://localhost:1337/

I am inconclusive on the results; to my understanding, rss and heapTotal may be freed by the system whenever another process requests memory, all the way down to heapUsed, which represents the actual amount of data currently used by V8. However, even heapUsed increases slightly over time. The jagged edges are likely the result of V8 performing garbage collection.

@robertrossmann
Contributor

I took three heapdumps using node-heapdump.

  1. At the start of Sails
  2. After 100k hits
  3. After 200k hits

Doing a comparison between the second and third shows us where memory is being continuously spent (there's a lot of memory allocation/deallocation as V8 performs optimisations during the initial 100k hits, which is of little interest to us).

My preliminary and very shallow research suggests there might be some memory leaks in the router, but my skills at analysing these dumps and knowledge of Sails' internals are too low to properly track these down.

A good article about memory analysis has been posted aeons ago by StrongLoop.

@mikedevita
Contributor

not sure if this is relevant or not but since Sails is built on express.. Netflix wrote a blog post a few months back about express memory consumption because of the route handler.

Its an interesting article and may/may not provide some insight into the issue?

Node.js In Flames - Netflix

@robertrossmann
Contributor

Ha! It's quite obvious, actually. :)

  1. What does a fresh Sails app use as session storage? (hint: yromem)
  2. What happens when you make a request to a Sails app without a cookie? (hint: detaerc si noisses wen)

The problem is that we have all been bashing at Sails with tools like ApacheBench to stress-test the application and watching our memory usage go through the roof. Each of these requests created a new session in Sails.

To accurately measure the memory usage of a fresh Sails application, we must first disable sessions (or find a way to re-use a valid session via a cookie). Once we do that, we see this memory usage pattern:

Conclusion: No indication of a memory leak.

For reference, here are also heapdumps for the stress-test I performed without having sessions enabled if one is so inclined to study them in detail.
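As a sketch of the cookie re-use approach mentioned above (assuming a default app on port 1337; the sails.sid value is a placeholder for whatever your first response sets), ApacheBench's -C flag sends a fixed cookie with every request, so no new sessions are created:

```shell
# 1. Grab a session cookie from a single request
curl -sD - -o /dev/null http://localhost:1337/ | grep -i set-cookie

# 2. Replay it on every benchmarked request
#    (replace the value with the sails.sid printed in step 1)
ab -c 500 -n 200000 -C 'sails.sid=PASTE_VALUE_HERE' http://localhost:1337/
```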

@crh3675

crh3675 commented Apr 10, 2015

@Alaneor nice work, very helpful. Funny thing is that I think @anhdd-savvycom wrote some bad code that spurred all this hard work. Thanks everyone for contributing.

@lvsenlin

lvsenlin commented May 7, 2015

So what is the final answer? I have the same problem.

@robertrossmann
Contributor

I do not believe there is any memory leak present in the default Sails application.

If you are seeing a constantly increasing memory footprint, then my suggestion is to switch to a database-backed session store (Sails uses an in-memory session store by default).

Additionally, you can see from the graph above that the memory usage could get as high as 250 MB, so unless you are seeing gigabytes of RAM consumed, there's no need to panic (yet).
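As a sketch, switching a Sails 0.x app to a Redis-backed session store looks roughly like this in config/session.js (the connection details are illustrative; check the session docs on sailsjs.org for your version):

```javascript
// config/session.js (sketch): back sessions with Redis instead of
// the default in-memory store, so benchmark-created sessions don't
// accumulate in the Node process itself.
module.exports.session = {
  secret: 'replace-with-your-own-secret',
  adapter: 'redis',   // requires `npm install connect-redis`
  host: 'localhost',
  port: 6379,
  db: 0,
  prefix: 'sess:'
};
```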

@lvsenlin

lvsenlin commented May 7, 2015

I tested by creating a new project with 'sails new sailsTest'. It's an empty new project; I just ran it.
If I refresh the homepage slowly, the memory doesn't grow. But if I press CMD+R very quickly, the memory grows a little and is never released.

I guess maybe the response object isn't being released.

Looking forward to a reply.

@robertrossmann
Contributor

That's not really a test. It's absolutely natural for a Node.js application to consume increasingly more memory immediately after you start the process; it generates optimised code and various other structures.

If you really want to check for memory leaks, first get rid of the in-memory session store, then do a "warm-up" (generate at least several hundred requests), and then measure your memory usage using process.memoryUsage().

Just look at the graph above. The first spike in memory usage is what happens when I hit the application with a thousand requests per second.
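A minimal version of that measurement can be as simple as logging process.memoryUsage() before and after the load run; heapUsed is the number to watch (the helper below is illustrative, not part of Sails):

```javascript
// Log V8 memory statistics in megabytes; call once before the warm-up,
// once after the load test, and compare the heapUsed values.
function logMemory(label) {
  var mem = process.memoryUsage();
  var mb = function (bytes) {
    return (bytes / 1024 / 1024).toFixed(1) + ' MB';
  };
  console.log(label,
    'rss:', mb(mem.rss),
    'heapTotal:', mb(mem.heapTotal),
    'heapUsed:', mb(mem.heapUsed));
}

logMemory('baseline');
```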

@tjwebb
Contributor

tjwebb commented May 7, 2015

Look, guys: if you're nervous because some line chart appears high in your opinion, that's not a bug. Node allocates memory for all kinds of reasons, and trying to divine all of them will only lead to confusion. I'm not convinced until you've been running a process for weeks or months and the memory slowly increases until the machine crashes; or until you can write a test case that reproduces the issue and reliably causes a memory explosion to 10GB or so.

@tjwebb tjwebb closed this as completed May 7, 2015
@lvsenlin

lvsenlin commented May 7, 2015

I tested using MongoDB to save sessions; it's still the same.

@tjwebb
Contributor

tjwebb commented May 7, 2015

Like @Alaneor said,

unless you are seeing gigabytes of RAM consumed, there's no need to panic

@lvsenlin if you're seeing this only with mongo, then try filing an issue in the mongo adapter repository.

@lvsenlin

lvsenlin commented May 7, 2015

OK, thanks. Sorry to trouble you.

@ghost

ghost commented Jul 24, 2015

I know this is a slightly old issue; however, we had a similar issue when running Sails in cluster mode with PM2. We kept seeing requests for sessions on Redis just sitting there doing nothing and leaking memory. It turned out to be the sockets/pubsub hooks. We disabled those in the .sailsrc file and achieved a significant reduction in the memory leak. For our use case we don't need Socket.IO or pubsub, so we are happy.

@lvsenlin

{
  "generators": {
    "modules": {}
  },
  "hooks": {
    "sockets": false,
    "pubsub": false
  }
}

Like this?

@crh3675

crh3675 commented Feb 26, 2016

I just saw some crazy behavior as well with a single Sails API using 0.12-rc4. The app was consuming 1.6GB of RAM. I disabled pubsub, grunt, i18n, and sessions within .sailsrc, and now the app uses a comfortable 80MB. This is running on CentOS 6, connecting to a local MySQL database.

"dependencies": {
    "include-all": "~0.1.3",
    "lodash.camelcase": "^3.0.1",
    "rc": "~0.5.0",
    "sails": "~0.12.0-rc4",
    "sails-disk": "^0.10.8",
    "sails-mysql": "^0.11.2",
    "sails-redis": "^0.10.6",
    "uuid": "^2.0.1"
}

Note that I used the no-frontend option, so I don't need all those. Just a simple REST API.

@mikedevita
Contributor

Probably Grunt.


@crh3675

crh3675 commented Feb 27, 2016

Grunt was already disabled :-( The others I just added, so it can't be Grunt. I guess the only true test is to step through each hook and see which one causes the issue, unless someone has a better solution.

@crh3675

crh3675 commented Mar 16, 2016

Still having the issue of memory climbing. Nowhere near where it was, but after about 7 days, the basic Sails no-frontend API, which starts at around 90MB, is now at 962MB.

@particlebanana
Contributor

@crh3675 I'm looking into a few of the hooks in #3638, but we haven't seen any memory issues on production servers (unless they happened to be running Node 0.12, which blew through memory).

@crh3675

crh3675 commented Mar 21, 2016

Note that I am also using PM2 across two instances with Node.js 0.12.7. Perhaps an upgrade of Node.js is in order again.

@crh3675

crh3675 commented Mar 29, 2016

Weeee! Add this to the main app.js file:

setInterval(function() {
   global.gc(); 
}, 30000);

This forces garbage collection every 30 seconds.

@maxiejbe

@crh3675
To take into account:

  1. The --expose-gc flag is needed when launching the Node process.
  2. Execution is paused until GC is completed, so performance can suffer if you run it this often.
    Look here

Is running it every 30 seconds a good idea?
@tjwebb Maybe the Sails team should decide when to execute the GC strategically? Setting a fixed time interval and running it seems like the easy fix.
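On the first point, a guard avoids a crash when the process was launched without that flag (a sketch; the 30-second interval simply mirrors the snippet above):

```javascript
// Only force a collection when the process was started with --expose-gc;
// without the flag, global.gc is undefined and calling it would throw.
var timer = setInterval(function () {
  if (typeof global.gc === 'function') {
    global.gc();
  }
}, 30000);

// Don't keep the process alive just for this timer
timer.unref();
```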

@robertrossmann
Contributor

Sails.js cannot (and should not) do this, for the simple reasons that

  1. GC execution frequency is best left to the discretion of V8, not the programmer, and
  2. manual GC execution requires a flag that Sails.js has no control over.

This is not a solution. Manually executing GC will not help at all, because a memory leak, by its very definition, means you have a piece of code that keeps allocating new resources which cannot be garbage-collected.

Additionally, it has been confirmed on several occasions in this thread that there is no memory leak in Sails, and that the observed behaviour of increasing memory usage in a brand-new Sails app is attributable to the default session store, which keeps sessions in memory.

@maxiejbe

I understand your point.

But the fact is that if you leave a basic API running (no frontend, so the Grunt hook is out) with the session, sockets, and pubsub hooks disabled, the memory still increases gradually (and never seems to stop).

Even when the session hook was enabled, I wasn't using the default memory store. I had set up my Mongo adapter and confirmed that sessions were being saved there.

I will try the forced GC execution. If memory usage is stable with that fix, then it's not a Sails.js leak; it might be related to the Node version in that case.

@crh3675

crh3675 commented Apr 11, 2016

@Alaneor, I have found that manually firing V8 garbage collection is the only way to release the memory. The code that keeps allocating new memory is not my code; it is bare-bones Sails.js using the sails-mysql 0.11.2 adapter. Going from 90MB to over 1GB of accumulated RAM is quite astonishing and needs to be revisited as a Sails.js issue.

As for running garbage collection every 30 seconds, I haven't seen any issues with doing so.

@particlebanana
Contributor

@crh3675 @maxiejbe I've spent a long time trying to find the phantom memory leaks claimed over the years. I have yet to find anywhere in the Sails/Waterline codebase where any leak occurs.

There are a few issues you can search for to read the results. The only ways I've been able to get memory to grow are by using Node 0.12, which has weird issues regardless of framework, or by using a remote session store such as connect-redis while the remote database fills up.

@crh3675

crh3675 commented Apr 11, 2016

Perhaps it is time to upgrade to 4.4, then.

@mikermcneil
Member

To be clear, we still investigate every claim to the best of our ability-- but the community around Sails has grown to the point now that lately we find ourselves investigating potential memory leaks at least once per month, if not more often. And as @particlebanana pointed out, despite spending many hours diving in on numerous occasions, we just haven't been able to replicate a leak (other than the Node v0.12-specific issue mentioned above). As you can imagine, it's gotten to the point where it's like the boy who cried wolf-- which isn't good for anyone.

There are definitely situations where memory leaks can occur in Sails apps (happened to me more than I'd care to talk about)-- it's just been my experience up until this point that those leaks come from either using non-production settings or from issues in app level code.

Something to keep in mind is that @particlebanana and I are running a couple of active apps on the latest stable version of Sails ourselves; thus we're very aware of production issues in the latest stable version of Sails (i.e. if they arise, they hit us very close to home). If a memory leak was found in Sails or Waterline core w/ recommended production settings, we'd expect to find out about it pretty quickly. But since we can't be 100% certain (because other adapters could be in play, Node version differences, etc) we have to check into this kind of thing every time.

So here's what I'd ask from everyone going forward:

Before reporting a potential memory leak

  • If you run into a suspected memory leak, the first thing to do is check that you're testing on an app using recommended production settings (see deployment/scalability docs on sailsjs.org.).
  • Next, check your app-level code (i.e. try to isolate the leak to a single endpoint). In my experience, it is very common w/ Node apps in general to end up with leaks from failing to handle errors (e.g. if you're using promises and forget to do a .catch()).
  • If you can't isolate the source of increasing memory usage to a particular endpoint or anything about your app code, then try replicating the leak in a brand new sails app running with no bells and whistles, with recommended production settings (see deployment docs on sailsjs.org). If you find an issue there, then please create an example repo that replicates the issue (and include the standard version info and instructions to reproduce).
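For the first point, lifting with production settings usually comes down to setting the environment before starting the process (a sketch; exact behaviour depends on your Sails version and your config/env/production.js):

```shell
# Lift the app with production config applied
NODE_ENV=production node app.js

# or, equivalently, via the Sails CLI
sails lift --prod
```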

Really appreciate the help! 🙇

@crh3675

crh3675 commented Apr 25, 2016

After regression-testing the different versions of Node.js, 0.12 is the culprit, as 4.4 doesn't have the memory issues. We used ApacheBench to hit the Sails API endpoint 10,000 times with both versions of Node.js. 0.12.7 didn't garbage-collect for some reason, whereas 4.4 did.

@mikermcneil
Member

@crh3675 interesting... Thanks for the update!

@marspark

I'm seeing the same memory increase issue with 4.4.7. I've tested with a fresh no-frontend project with session, grunt, and sockets disabled. #3782

@mikermcneil
Member

@marspark Thanks for opening a new issue- I responded here: #3782 (comment)

For posterity: I also added instructions at that link (^^) on how to reproduce. Re: reporting memory leaks, see my comment above in this thread. If anyone reading this suspects a memory leak, please run through those instructions and let us know ASAP in a new issue. Thanks!

@balderdashy balderdashy locked and limited conversation to collaborators Aug 9, 2016