Sails memory leak using sails-postgresql adapter and model.find #4313

Open
paul23-git opened this issue Feb 9, 2018 · 10 comments


paul23-git commented Feb 9, 2018

Sails version: 1.0
Node version: 8.9.0
NPM version: 5.5.1
DB adapter name: sails-postgresql
DB adapter version: 1.0.0-12
Operating system: Windows 10


This is a repost of an issue I posted on Stack Overflow: https://stackoverflow.com/questions/48698000/does-v8-node-actually-garbage-collect-during-function-calls-or-is-this-a-sail

SailsJS seems to leak when repeatedly calling model.find({}). While the code uses populate, it still "leaks" without populate, though much less, and it just takes more time to crash.

A minimal example can be found on GitHub at: https://github.com/pulli23/memory-leak-sails

To reproduce, the application must be run with the following options:

Node flag: --max-old-space-size=80
Environment variable: NODE_ENV=production

Then, sending a POST request to localhost:1337/start-runner starts the runner (found in WorkerController.DoRun()). It connects to a PostgreSQL database (connection details redacted for obvious reasons).
On my setup it runs about 2500 times, each time finding roughly 2000 elements.
After that, Node crashes reporting a heap out-of-memory error.
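
For reference, a minimal sketch of what such a runner might look like; this is a hypothetical reconstruction rather than the actual code from the linked repo, and the model name TestModel is made up:

```js
// api/controllers/WorkerController.js -- hypothetical sketch of the repro loop
module.exports = {
  // Bound to POST /start-runner in the repro app
  DoRun: async function (req, res) {
    res.ok(); // respond immediately, then keep querying in the background

    let run = 0;
    // With --max-old-space-size=80 this loop eventually dies with a heap
    // out-of-memory error after a few thousand iterations, even though the
    // result array goes out of scope on every pass.
    // TestModel stands in for whatever model the real repro queries
    // (the real repro also uses .populate(), which makes the leak worse).
    while (true) {
      run++;
      const records = await TestModel.find({}); // ~2000 records per call
      sails.log.info('run ' + run + ': found ' + records.length + ' records');
    }
  }
};
```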

Increasing max-old-space-size dramatically increases the number of runs, but it still crashes after "some" time.

Using a mock method (commented out in the GitHub example), it does not crash and the program keeps running without problems.

The combination of these two observations: consistent crashing at about the same run count when calling model.find({}), and no crash when using a mock that produces the same result, leads me to believe there's a memory leak in Sails or the adapter.
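
For context, the mock alternative mentioned above presumably builds a similar-sized result in plain JavaScript instead of going through the adapter; something along these lines (a hypothetical sketch, not the actual commented-out code from the repo):

```js
// Hypothetical stand-in for TestModel.find({}): fabricates ~2000 plain objects
// per call without touching sails-postgresql. Swapping this in for the real
// find() is the control case that does not crash.
async function mockFind() {
  const records = [];
  for (let i = 0; i < 2000; i++) {
    records.push({ id: i, name: 'record-' + i, createdAt: Date.now() });
  }
  return records;
}
```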


sailsbot commented Feb 9, 2018

@pulli23 Thanks for posting, we'll take a look as soon as possible.


For help with questions about Sails, click here. If you’re interested in hiring @sailsbot and her minions in Austin, click here.

paul23-git changed the title from "Sails" to "Sails memory leak using sails-postgresql adapter and model.find" on Feb 9, 2018
@sailsbot

@pulli23,@sailsbot: Hello, I'm a repo bot-- nice to meet you!

It has been 30 days since there have been any updates or new comments on this page. If this issue has been resolved, feel free to disregard the rest of this message and simply close the issue if possible. On the other hand, if you are still waiting on a patch, please post a comment to keep the thread alive (with any new information you can provide).

If no further activity occurs on this thread within the next 3 days, the issue will automatically be closed.

Thanks so much for your help!

sailsbot added the "waiting to close" label on Mar 12, 2018
@paul23-git (Author)

I can't provide more information without any input from a developer. I'd like to see some activity from your side.

sailsbot removed the "waiting to close" label on Mar 14, 2018
@wulfsolter (Contributor)

I too have seen this; https://github.com/balderdashy/waterline/issues/1384 was my GitHub issue a while back. I haven't solved it yet; I just run 4 processes (4-core CPU) in production with PM2 restarting them as they die every few days, which is usually less often than deployments restarting them.
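
For anyone copying that workaround, a PM2 ecosystem file along these lines would do it; this is an assumption about the setup described above (and the 4 GB ceiling mentioned later in the thread), not wulfsolter's actual config:

```js
// ecosystem.config.js -- sketch of the PM2 restart-on-memory workaround
module.exports = {
  apps: [{
    name: 'sails-app',
    script: 'app.js',
    exec_mode: 'cluster',
    instances: 4,               // one worker per CPU core
    max_memory_restart: '4G',   // recycle a worker once it reaches ~4 GB
    env: {
      NODE_ENV: 'production'
    }
  }]
};
```

Started with `pm2 start ecosystem.config.js`, PM2 then recycles any worker that crosses the memory limit instead of letting it crash with a heap error.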

mikermcneil (Member) commented Mar 25, 2018

@pulli23 @wulfsolter apologies for the delay guys, been busy.

Would you try the recent v1.0.0-13 release of sails-postgresql? That ostensibly solves the problem using the same approach that solved a similar issue for MySQL.

@mikermcneil (Member)

@wulfsolter actually, I just remembered you posted #4264 (comment), and it looks like that was slightly after. Hopefully that means the issue is still fixed for you as well, but if not, please keep me apprised.

@wulfsolter (Contributor)

[Screenshot: memory usage chart]

I think the Model.find() leak is still around. We're running a read-heavy application (~3M rows read, only ~5000 written daily) but it works: we deploy more often than we run out of RAM, and PM2 simply restarts a process at 4 GB. Sure, we lose a few requests, but with most of our clients having less-than-perfect connectivity, our RequestManager() is pretty robust.

Small problems compared to a beautiful framework that makes our lives so much easier :D And there were certainly a few 'death glares' when I gave a presentation on migrating MySQL+Mongo -> Postgres with no downtime, thanks to the great ORM :D https://wulfsolter.github.io/presentation-mongoToPostgre/#/

@mikermcneil (Member)

@wulfsolter haha no doubt, thanks for the link!

So just to clarify: the valleys in that chart are where PM2 is restarting the process due to a process crash from running out of heap, and not where the garbage collector runs?

@wulfsolter (Contributor)

Correct, the valleys are where the process restarts, either due to running out of heap or deployments.

mikermcneil self-assigned this on Apr 5, 2018
@vidueirof

Hi, I'm using Mongo and I'm having the same issue.

I'm using Locust to load-test my API and I can verify that used memory always increases. I'm just doing a simple Model.find().

Help would be appreciated, as my app is in production at the moment and my containers are crashing after some time.
Thanks
