possible memory leak #5568
The question is, does it come back down eventually? Sails might open a lot of connections with Postgres and do all kinds of stuff that uses RAM. (BTW, I've seen people complaining in the io.js Gitter that Node 0.12 uses a lot more RAM than 0.10.) I guess I'm looking for some more detailed measurements |
It doesn't come back; it stays in the system's "active" memory. A few more tests: so now we suspect it is a Node.js issue. Or should it still drop back and stay around 120MB, like it does without any data seed? |
I would be very interested to know if there's a leak here. My suggestion,
|
Yeah if theres a leak we need to track it down but I haven't had it come up yet in production environments. Mind posting your bootstrap file here so we can see if it's anything blocking or leaking there? Don't use |
My bootstrap file is just a bunch of findOrCreates, just like I posted. I use sails-postgresql. BTW, for those who want to replicate the issue:
|
Node 0.11 is definitely consuming more memory than 0.10. Below are the results of some load tests I ran on Travis (taken from https://github.com/balderdashy/sails-postgresql/issues/141#issuecomment-88208132):
|
Node 0.11 is edge, right? We probably want to know about 0.10 and 0.12. |
Node 0.11 and 0.12, and iojs are known to consume more RAM at idle than 0.10 and earlier. I wouldn't necessarily attribute this to a problem in sails or waterline. |
It's not about consuming more RAM at idle. Memory usage rises fast when you do db actions and it doesn't get freed back... like a memory leak |
can you guys post output of |
Ah ok, sorry I missed that. In general I'm really not interested in what |
Node 0.12, sails-memory, load test results (idle is measured 1s after the last .find()): Hitting the db with 100 requests...
0: rss: 39.43 MB, heapTotal: 31.60 MB, heapUsed: 11.97 MB . . . . . . . . .
10: rss: 236.66 MB, heapTotal: 223.79 MB, heapUsed: 152.70 MB . . . . . . . . .
20: rss: 236.66 MB, heapTotal: 223.79 MB, heapUsed: 152.72 MB . . . . . . . . .
30: rss: 236.66 MB, heapTotal: 223.79 MB, heapUsed: 152.73 MB . . . . . . . . .
40: rss: 236.67 MB, heapTotal: 223.79 MB, heapUsed: 152.81 MB . . . . . . . . .
50: rss: 236.67 MB, heapTotal: 223.79 MB, heapUsed: 152.82 MB . . . . . . . . .
60: rss: 236.67 MB, heapTotal: 223.79 MB, heapUsed: 152.83 MB . . . . . . . . .
70: rss: 236.67 MB, heapTotal: 223.79 MB, heapUsed: 152.83 MB . . . . . . . . .
80: rss: 236.67 MB, heapTotal: 223.79 MB, heapUsed: 152.84 MB . . . . . . . . .
90: rss: 236.67 MB, heapTotal: 223.79 MB, heapUsed: 152.85 MB . . . . . . . . .
100: rss: 236.69 MB, heapTotal: 223.79 MB, heapUsed: 152.86 MB
time elapsed: 12060ms
idle: rss: 236.67 MB, heapTotal: 223.79 MB, heapUsed: 152.92 MB https://travis-ci.org/balderdashy/waterline-adapter-tests/jobs/56787563#L604 |
Node 0.10, sails-memory, load test results (idle is measured 1s after the last .find()): Hitting the db with 100 requests...
0: rss: 35.76 MB, heapTotal: 28.06 MB, heapUsed: 10.54 MB . . . . . . . . .
10: rss: 137.54 MB, heapTotal: 129.42 MB, heapUsed: 99.08 MB . . . . . . . . .
20: rss: 151.48 MB, heapTotal: 141.76 MB, heapUsed: 110.03 MB . . . . . . . . .
30: rss: 153.74 MB, heapTotal: 143.81 MB, heapUsed: 105.53 MB . . . . . . . . .
40: rss: 158.10 MB, heapTotal: 148.97 MB, heapUsed: 116.46 MB . . . . . . . . .
50: rss: 170.33 MB, heapTotal: 161.33 MB, heapUsed: 127.39 MB . . . . . . . . .
60: rss: 154.10 MB, heapTotal: 144.83 MB, heapUsed: 68.52 MB . . . . . . . . .
70: rss: 154.98 MB, heapTotal: 144.83 MB, heapUsed: 79.55 MB . . . . . . . . .
80: rss: 132.77 MB, heapTotal: 122.13 MB, heapUsed: 39.78 MB . . . . . . . . .
90: rss: 105.54 MB, heapTotal: 95.35 MB, heapUsed: 32.93 MB . . . . . . . . .
100: rss: 87.71 MB, heapTotal: 77.81 MB, heapUsed: 27.38 MB
time elapsed: 12429ms
idle: rss: 87.71 MB, heapTotal: 77.81 MB, heapUsed: 27.41 MB https://travis-ci.org/balderdashy/waterline-adapter-tests/jobs/56787565#L633 |
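For reference, per-sample lines like the ones in these load tests can be produced with Node's built-in `process.memoryUsage()`. This is only a minimal sketch of that idea, not the actual waterline-adapter-tests harness; the `fmt` and `sample` helpers are assumptions:

```javascript
// Sketch: log rss/heapTotal/heapUsed in the same shape as the
// load-test output above. fmt() and sample() are illustrative
// helpers, not part of the real test suite.
function fmt(bytes) {
  return (bytes / 1024 / 1024).toFixed(2) + ' MB';
}

function sample(label) {
  const m = process.memoryUsage();
  const line = label + ': rss: ' + fmt(m.rss) +
    ', heapTotal: ' + fmt(m.heapTotal) +
    ', heapUsed: ' + fmt(m.heapUsed);
  console.log(line);
  return line;
}

sample(0); // e.g. "0: rss: 39.43 MB, heapTotal: ..."
```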
Not only does Node 0.12 consume more memory, it also has some issues letting it go. You can click the links to see results for all adapters. |
Spinning up the adapter query and express middleware apparatus for the first request, binding functions/contexts, and caching stuff will use RAM. Looks like 95% of the ram increase occurs between requests 0 and 1. After that, it's steady, or decreasing. To me, those numbers look perfectly normal. |
Whether it's waterline/adapter related or Node 0.11/0.12 related, I'm almost sure there is a memory leak, because the 1GB of used memory doesn't get GC-ed and freed back to the system (tried on Linux and Mac). Also tried with sails-mongo now, with a similar issue. BTW, although it stays around 200MB with Node 0.10 in my use case, that doesn't mean there is no memory leak. It feels like Node is limiting there because
PS: memory use increases when I seed the data with nested creates |
@mkeremguc, can you please create a load test (you can use this as a base) with nested creates that show evidence of a memory leak? So far we haven't been able to reproduce your findings. PS: PR #924 (released on v0.10.20) improves waterline's memory consumption. Update: PR #938 partially reverted #924 so we lost most of the performance improvements. |
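A leak check like the one requested usually compares `heapUsed` before and after many repetitions of the suspect operation. Below is a hedged, self-contained sketch of that shape; `noopCreate` is a stand-in for the nested creates under suspicion (a real test would call Waterline), and without a forced GC the growth number is only indicative:

```javascript
// Hypothetical harness: measure heapUsed growth across repeated
// async operations. `op` stands in for the nested creates discussed
// in this thread; any steadily rising growth across runs would be
// evidence worth reporting.
async function measureGrowth(op, iterations) {
  const before = process.memoryUsage().heapUsed;
  for (let i = 0; i < iterations; i++) {
    await op(i);
  }
  const after = process.memoryUsage().heapUsed;
  return after - before; // bytes; may be negative after a GC pass
}

// Stand-in operation: allocate and drop a small object graph.
const noopCreate = async (i) => ({ id: i, children: [{ id: i }] });

measureGrowth(noopCreate, 1000).then((growth) => {
  console.log('heapUsed growth:', growth, 'bytes');
});
```

Running it several times in a row (or with `--expose-gc` and a `global.gc()` call before each sample) gives a steadier signal.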
Not sure if this is related, but I'm streaming a large JSON file of users into a Cassandra database through our Sails API, and after around 25,000 records the node process caps out at 1.45GB of memory and the whole thing slows to a crawl. The main logic is:
So after the stream ends at ~20,000 records, the process just sits there holding 1GB of RAM like a kid with a bag of candy. Confirmed it isn't stream-related, as this does the same thing:
Running Node 0.12, latest stable sails and dtoubelis's sails Cassandra adapter. If there's anything I can do to help, let me know. |
Have you tested adapters besides cassandra? |
No, doing that now |
Cool let us know how it turns out. Ever since node 0.12 we've seen a lot more reports of memory issues. Because node 0.12 seems to use more ram anyway (for reasons I don't quite understand), it's been difficult to distinguish so far between possible issues in sails/waterline vs. node |
Okay, so the local disk gives out after around 900 inserts with:
Probably just can't take the blast of writes. But the couch adapter gets to 20,000 inserts fine, hovering at 200-220MB. After the process finishes it doesn't give the memory back, but that's likely Node's problem. The write speed and the amount of memory being used could be making the difference, though: Cassandra will do around 800 a second, whereas couch will only do 200. Taking a dive into the Cassandra adapter; will come back if it's got a problem, else I might just spawn processes to do 10k writes at a time. |
Yea the sails-disk adapter is pretty terrible in terms of performance. It's really just designed for development. Looks like you've cornered the cassandra adapter as the likely culprit. If you don't mind, file an issue in their repository and link it here so we can follow along. Thanks! |
Just chiming in– sails-disk and sails-memory hold all data in memory, so they will always appear to leak. |
Thanks for posting, @mkeremguc. I'm a repo bot-- nice to meet you! It has been 60 days since there have been any updates or new comments on this page. If this issue has been resolved, feel free to disregard the rest of this message. On the other hand, if you are still waiting on a patch, please:
Thanks so much for your help! |
For future reference, there's more information and tips on diagnosing memory leaks in Node apps, and on reproducing+reporting suspected memory leaks in Sails here: #3782 (comment) |
Using sails' config/bootstrap.js to seed some data like:
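The reporter's original snippet was not preserved in this thread; per the later comment it was "just a bunch of findOrCreates". The following is a hypothetical reconstruction of that shape only. `User` is a stub model here; in a real config/bootstrap.js it would be a globally exposed Waterline model:

```javascript
// Hypothetical reconstruction (not the original snippet): seed via
// many findOrCreate calls from bootstrap. `User` is an in-memory
// stub standing in for a Waterline model.
const User = {
  _store: new Map(),
  findOrCreate(criteria, values) {
    const key = JSON.stringify(criteria);
    if (!this._store.has(key)) this._store.set(key, values);
    return Promise.resolve(this._store.get(key));
  },
};

function seed(cb) {
  const rows = [];
  for (let i = 0; i < 200; i++) rows.push({ name: 'user' + i });
  Promise.all(rows.map((r) => User.findOrCreate({ name: r.name }, r)))
    .then(() => cb())
    .catch(cb);
}
```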
Depending on the amount of data I seed there, memory usage rises, and I suspect a big memory leak. Without any seed, sails stays around 120MB idle. With ~200 inserts, it goes up to 210MB. With more data, although the data seeded is only around 4-5MB, I even see it go over 1GB and stay there at idle.
Using sails-postgresql as the adapter, pm2 as the process manager.
what i tried so far:
- inserting all the records in parallel and calling bootstrap's cb when done, with the async library
- using bluebird instead of async
- create() instead of findOrCreate()
- using sails-disk as the db
- not using pm2 and lifting directly
All the same... with my current seed data, memory usage goes over 1GB.
I don't know if this is related to waterline or the adapter (though I tried both postgresql and disk), or if bootstrap is not the place for seeding data?
any ideas ?
edit: node-v0.12.0, sails-v0.11.0, sails-postgresql-v0.10.14
edit: node-v0.11.16 - same result
node-v0.10.36 - dramatically reduced to ~200MB of RAM usage.