
possible memory leak #5568

Closed
mkeremguc opened this issue Mar 9, 2015 · 27 comments

@mkeremguc

Using sails' config/bootstrap.js to seed some data, like:

User.findOrCreate({id:1},{id:1, username: 'user1'}).exec(function(){});
User.findOrCreate({id:2},{id:2, username: 'user2'}).exec(function(){});
...
return cb();

Depending on the amount of data I seed there, memory usage rises, and I suspect a big memory leak. Without any seeding, sails stays around 120 MB idle. With ~200 inserts, it goes up to 210 MB. With more data, although the data seeded is only around 4-5 MB, I've even seen it go over 1 GB and stay there at idle.

Using sails-postgresql as the adapter, pm2 as the process manager.

What I've tried so far:
- inserting all the records in parallel and calling bootstrap's cb when done, with the async library
- using bluebird instead of async
- create() instead of findOrCreate()
- using sails-disk as the db
- not using pm2 and lifting directly

All the same: with my current seed data, memory usage goes over 1 GB.
I don't know if this is related to waterline or the adapter (though I tried both postgres and disk), or if bootstrap is simply not the place for seeding data.

Any ideas?

Edit: node-v0.12.0, sails-v0.11.0, sails-postgresql-v0.10.14

Edit: node-v0.11.16 - same result
node-v0.10.36 - dramatically reduced, to ~200 MB of RAM usage.

@tjwebb
Contributor

tjwebb commented Mar 10, 2015

The question is, does it come back down eventually? Sails might open a lot of connections with postgres and do all kinds of stuff that uses RAM. (BTW, I've seen people complaining in the io.js gitter that Node 0.12 uses a lot more RAM than 0.10.)

I guess I'm looking for some more detailed measurements.

@mkeremguc
Author

It doesn't come back down; it stays in the system's "active" memory.

A few more tests:
node-v0.11.16 - same result
node-v0.10.36 - dramatically reduced, to ~200 MB of RAM usage.

So now: is this a Node.js issue, or should it still come back down and stay around 120 MB, like it does without any data seeding?

@devinivy

I would be very interested to know if there's a leak here. My suggestion:

  1. Do a bunch of creates to let the memory come to equilibrium, in case there's not actually a leak.
  2. Take a heap snapshot.
  3. Do a fixed number of creates.
  4. Take another heap snapshot.
  5. Diff the snapshots, and see whether any objects stick around in proportion to the number of creates you performed in step 3.
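
For reference, one way to capture those snapshots from inside the app is the heapdump npm module; here's a minimal sketch (the file naming and the takeSnapshot helper are illustrative, not part of anyone's setup here):

var heapdump = require('heapdump');

// Writes a .heapsnapshot file you can load and diff in Chrome DevTools.
function takeSnapshot(label, done) {
  var filename = './' + label + '-' + Date.now() + '.heapsnapshot';
  heapdump.writeSnapshot(filename, function (err, file) {
    if (err) { return done(err); }
    console.log('heap snapshot written to', file);
    done();
  });
}

// e.g. takeSnapshot('before-creates', next), then 'after-creates' for the diff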

@particlebanana
Contributor

Yeah, if there's a leak we need to track it down, but I haven't had it come up yet in production environments. Mind posting your bootstrap file here so we can see if anything is blocking or leaking there? Don't use sails-disk in these tests; it keeps everything in memory and syncs to disk.

@mkeremguc
Author

My bootstrap file is just a bunch of findOrCreates, exactly like I posted. I use sails-postgresql. BTW, for anyone who wants to replicate the issue:

  • install nodejs 0.12 or 0.11
  • generate some models with some attributes
  • use findOrCreate to insert lots of sample data (in my case, in bootstrap.js; see the sketch below)
  • monitor the memory usage with your favorite tool (pm2 in my case)
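
A minimal config/bootstrap.js along those lines might look like this (a sketch: the User model, its attributes, and the record count are placeholders, not the actual seed data from this report):

// config/bootstrap.js -- minimal seed sketch
var async = require('async');

module.exports.bootstrap = function (cb) {
  var ids = [];
  for (var i = 1; i <= 500; i++) { ids.push(i); }

  // Run all the findOrCreates in parallel and call bootstrap's cb when done,
  // mirroring the approach described above.
  async.each(ids, function (id, next) {
    User.findOrCreate({ id: id }, { id: id, username: 'user' + id }).exec(next);
  }, cb);
};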

@dmarcelino
Member

Node 0.11 is definitely consuming more memory than 0.10. Below are the results of some load tests I ran on Travis (taken from https://github.com/balderdashy/sails-postgresql/issues/141#issuecomment-88208132):

Machine   Node   Memory and disk adapters   SQL adapters   Reference
Travis    0.10   ~ 150 MB                   ~ 150 MB       build 19.2
Travis    0.11   ~ 270 MB                   ~ 740 MB       build 19.1

@devinivy

devinivy commented Apr 1, 2015

Node 0.11 is edge, right? We probably want to know about 0.10 and 0.12.

@tjwebb
Contributor

tjwebb commented Apr 1, 2015

Node 0.11, 0.12, and io.js are known to consume more RAM at idle than 0.10 and earlier. I wouldn't necessarily attribute this to a problem in sails or waterline.

@mkeremguc
Author

It is not about consuming more RAM at idle. Memory usage rises fast when you do DB actions, and it doesn't get freed back... like a memory leak.

@tjwebb
Contributor

tjwebb commented Apr 1, 2015

Can you guys post the output of process.memoryUsage() as you're running these tests? I use sails-postgresql to move a lot of data around (millions of rows) and have never had issues with RAM usage.
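
Something along these lines, dropped into the app while the seed runs, would produce that kind of output (a sketch; the one-second interval is an arbitrary choice):

// Log Node's own view of its memory once a second.
// process.memoryUsage() reports bytes; divide by 1048576 for MB.
setInterval(function () {
  var m = process.memoryUsage();
  console.log(
    'rss: %s MB, heapTotal: %s MB, heapUsed: %s MB',
    (m.rss / 1048576).toFixed(2),
    (m.heapTotal / 1048576).toFixed(2),
    (m.heapUsed / 1048576).toFixed(2)
  );
}, 1000);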

@dmarcelino
Member

> can you guys post output of process.memoryUsage() as you're running these tests?

@tjwebb, that's what my load test does.

@tjwebb
Contributor

tjwebb commented Apr 1, 2015

Ah ok, sorry I missed that.

In general I'm really not interested in what pm2 or top or any other external tool says. Node might be using 1GB of memory, but might also have a perfectly good reason for not gc-ing that memory yet.
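
One way to tell "hasn't GC-ed yet" apart from a real leak is to force a collection and see whether usage drops; a minimal sketch, assuming the process is started with the --expose-gc flag:

// Run with: node --expose-gc app.js
// If memory drops after a forced collection, the GC was just being lazy;
// if it stays high, something is still holding references.
if (global.gc) {
  console.log('before gc:', process.memoryUsage());
  global.gc();
  console.log('after gc: ', process.memoryUsage());
}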

@dmarcelino
Member

Node 0.12, sails-memory, load test results (idle is measured 1s after the last .find()):

Hitting the db with 100 requests...
 0: rss: 39.43 MB, heapTotal: 31.60 MB, heapUsed: 11.97 MB . . . . . . . . .
10: rss: 236.66 MB, heapTotal: 223.79 MB, heapUsed: 152.70 MB . . . . . . . . .
20: rss: 236.66 MB, heapTotal: 223.79 MB, heapUsed: 152.72 MB . . . . . . . . .
30: rss: 236.66 MB, heapTotal: 223.79 MB, heapUsed: 152.73 MB . . . . . . . . .
40: rss: 236.67 MB, heapTotal: 223.79 MB, heapUsed: 152.81 MB . . . . . . . . .
50: rss: 236.67 MB, heapTotal: 223.79 MB, heapUsed: 152.82 MB . . . . . . . . .
60: rss: 236.67 MB, heapTotal: 223.79 MB, heapUsed: 152.83 MB . . . . . . . . .
70: rss: 236.67 MB, heapTotal: 223.79 MB, heapUsed: 152.83 MB . . . . . . . . .
80: rss: 236.67 MB, heapTotal: 223.79 MB, heapUsed: 152.84 MB . . . . . . . . .
90: rss: 236.67 MB, heapTotal: 223.79 MB, heapUsed: 152.85 MB . . . . . . . . .
100: rss: 236.69 MB, heapTotal: 223.79 MB, heapUsed: 152.86 MB
time elapsed: 12060ms
idle: rss: 236.67 MB, heapTotal: 223.79 MB, heapUsed: 152.92 MB

https://travis-ci.org/balderdashy/waterline-adapter-tests/jobs/56787563#L604

@dmarcelino
Member

Node 0.10, sails-memory, load test results (idle is measured 1s after the last .find()):

Hitting the db with 100 requests...
 0: rss: 35.76 MB, heapTotal: 28.06 MB, heapUsed: 10.54 MB . . . . . . . . .
10: rss: 137.54 MB, heapTotal: 129.42 MB, heapUsed: 99.08 MB . . . . . . . . .
20: rss: 151.48 MB, heapTotal: 141.76 MB, heapUsed: 110.03 MB . . . . . . . . .
30: rss: 153.74 MB, heapTotal: 143.81 MB, heapUsed: 105.53 MB . . . . . . . . .
40: rss: 158.10 MB, heapTotal: 148.97 MB, heapUsed: 116.46 MB . . . . . . . . .
50: rss: 170.33 MB, heapTotal: 161.33 MB, heapUsed: 127.39 MB . . . . . . . . .
60: rss: 154.10 MB, heapTotal: 144.83 MB, heapUsed: 68.52 MB . . . . . . . . .
70: rss: 154.98 MB, heapTotal: 144.83 MB, heapUsed: 79.55 MB . . . . . . . . .
80: rss: 132.77 MB, heapTotal: 122.13 MB, heapUsed: 39.78 MB . . . . . . . . .
90: rss: 105.54 MB, heapTotal: 95.35 MB, heapUsed: 32.93 MB . . . . . . . . .
100: rss: 87.71 MB, heapTotal: 77.81 MB, heapUsed: 27.38 MB
time elapsed: 12429ms
idle: rss: 87.71 MB, heapTotal: 77.81 MB, heapUsed: 27.41 MB

https://travis-ci.org/balderdashy/waterline-adapter-tests/jobs/56787565#L633

@dmarcelino
Member

Not only does Node 0.12 consume more memory, it also has some issues letting it go. You can click the links to see results for all adapters.

@tjwebb
Contributor

tjwebb commented Apr 1, 2015

Spinning up the adapter query and express middleware apparatus for the first request, binding functions/contexts, and caching stuff will all use RAM. It looks like 95% of the RAM increase occurs between requests 0 and 1; after that, it's steady or decreasing.

To me, those numbers look perfectly normal.

@mkeremguc
Author

Whether it's waterline/adapter related or Node 0.11/0.12 related, I'm almost sure there is a memory leak, because the 1 GB of used memory doesn't get GC-ed and freed back to the system (tried on Linux and Mac). I also tried with sails-mongo now: similar issue.

BTW, although it stays around 200 MB with Node 0.10 in my use case, that doesn't mean there is no memory leak. It feels like Node is capping it there, because:

  1. the memory also doesn't get freed until it reaches around 200 MB
  2. that's still double the memory used when not seeding any data

PS: memory use increases when I seed the data with nested creates (roughly the pattern sketched below).
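
For clarity, "nested creates" here means something like the following (a sketch; the models and the cb callback are hypothetical stand-ins):

// Each create kicks off child creates from its callback, so every record
// in the tree stays referenced until the innermost callback fires.
User.create({ username: 'user1' }).exec(function (err, user) {
  if (err) { return cb(err); }
  Post.create({ title: 'post1', owner: user.id }).exec(function (err, post) {
    if (err) { return cb(err); }
    Comment.create({ body: 'hello', post: post.id }).exec(cb);
  });
});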

@dmarcelino
Member

@mkeremguc, can you please create a load test (you can use this as a base) with nested creates that shows evidence of a memory leak? So far we haven't been able to reproduce your findings.

PS: PR #924 (released on v0.10.20) improves waterline's memory consumption.

Update: PR #938 partially reverted #924 so we lost most of the performance improvements.

@filmerjarred

Not sure if this is related, but I'm streaming a large JSON file of users into a Cassandra database through our sails API, and after around 25,000 records the Node process caps out at 1.45 GB of memory and the whole thing slows to a crawl. The main logic is:

var es = require('event-stream'); // stream helpers (mapSync et al.)
var inserted = 0;

var stream = getStream().pipe(es.mapSync(function (data) {
    if (inserted > 20000) {
        stream.emit("end");
    }
    stream.pause();           // hold the stream while the insert runs
    inserted++;
    Inquiry.create(data).exec(function () {
        stream.resume();      // then pull the next record
    });
}));

So after the stream ends at 20,000, the process just sits there with 1 GB of RAM like a kid with a bag of candy. I've confirmed it isn't stream related, as this does the same thing:

var inserted = 0;

function insert() {
    if (inserted < 25000) {
        var data = { id: inserted };   // 'data' was an implicit global
        inserted++;
        Inquiry.create(data).exec(insert);
    }
}
insert();

Running Node 0.12, the latest stable sails, and dtoubelis's Cassandra adapter for sails. If there's anything I can do to help, let me know.

@tjwebb
Contributor

tjwebb commented May 1, 2015

Have you tested adapters besides Cassandra?

@filmerjarred

No, doing that now

@tjwebb
Contributor

tjwebb commented May 1, 2015

Cool, let us know how it turns out.

Ever since Node 0.12 we've seen a lot more reports of memory issues. Because Node 0.12 seems to use more RAM anyway (for reasons I don't quite understand), it's been difficult so far to distinguish between possible issues in sails/waterline and issues in Node.

@filmerjarred

Okay, so the local disk adapter gives out after around 900 inserts with:

function strWrapper(err, written) {
            ^
Error: UNKNOWN, unknown error 'C:\...\.tmp\localDiskDb.db'

Probably it just can't take the blast of writes. But the Couch adapter gets to 20,000 inserts fine, hovering at 200-220 MB. After the process finishes it doesn't give the memory back, but that's likely Node's problem. The write speed and the amount of memory being used could be making the difference, though: Cassandra will do around 800 inserts a second, whereas Couch will only do 200. I'm taking a dive into the Cassandra adapter and will come back if it's got a problem; otherwise I might just spawn processes to do 10k writes at a time (roughly as sketched below).
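
If it comes to that, the process-per-batch workaround could look something like this (a sketch: seed-batch.js, the batch size, and the sequential scheduling are hypothetical, and it assumes each worker exits after its batch so the OS reclaims the memory):

// parent.js -- fork one worker per 10k-record batch, one at a time.
var fork = require('child_process').fork;

function runBatch(offset, total) {
  if (offset >= total) { return console.log('done'); }
  var worker = fork('./seed-batch.js', [String(offset), '10000']);
  worker.on('exit', function () {
    runBatch(offset + 10000, total);   // worker memory is reclaimed on exit
  });
}

runBatch(0, 25000);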

@tjwebb
Contributor

tjwebb commented May 1, 2015

Yea the sails-disk adapter is pretty terrible in terms of performance. It's really just designed for development.

Looks like you've cornered the cassandra adapter as the likely culprit. If you don't mind, file an issue in their repository and link it here so we can follow along. Thanks!

@devinivy

devinivy commented May 1, 2015

Just chiming in: sails-disk and sails-memory hold all data in memory, so they will always appear to leak.

@sailsbot

Thanks for posting, @mkeremguc. I'm a repo bot-- nice to meet you!

It has been 60 days since there have been any updates or new comments on this page. If this issue has been resolved, feel free to disregard the rest of this message. On the other hand, if you are still waiting on a patch, please:

  • review our contribution guide to make sure this submission meets our criteria (only verified bugs with documented features, please; no questions, commentary, or bug reports about undocumented features or unofficial plugins)
  • create a new issue with the latest information, including updated version details with error messages, failing tests, and a link back to the original issue. This allows GitHub to automatically create a back-reference for future visitors arriving from search engines.

Thanks so much for your help!

@mikermcneil
Member

For future reference, there's more information and tips on diagnosing memory leaks in Node apps, and on reproducing+reporting suspected memory leaks in Sails here: #3782 (comment)

@balderdashy balderdashy locked and limited conversation to collaborators Aug 9, 2016
@raqem raqem pinned this issue May 15, 2019
@raqem raqem unpinned this issue May 15, 2019
@balderdashy balderdashy unlocked this conversation May 15, 2019
@raqem raqem transferred this issue from balderdashy/waterline May 15, 2019
@balderdashy balderdashy locked as resolved and limited conversation to collaborators May 15, 2019