
Error R14 (Memory quota exceeded) #8

Open
felipap opened this issue Sep 3, 2014 · 12 comments

Comments

@felipap

felipap commented Sep 3, 2014

Hi!

Thanks for this great library.
Unfortunately, while trying to use it on Heroku, the dyno keeps exceeding its memory quota. Here is the exact log:

2014-09-03T19:55:38.416261+00:00 heroku[web.1]: Process running mem=523M(102.2%)
2014-09-03T19:55:38.416525+00:00 heroku[web.1]: Error R14 (Memory quota exceeded)

It works fine for a while (no error logs), then starts exceeding the quota in a loop until I restart the dyno, after which everything is back to normal for a while.

app[web.1]: info: <anonymous@{removed}>: HTTP HEAD / name=QI, hostname={removed}, pid=7, time=Wed Sep 03 2014 19:49:31 GMT+0000 (UTC)
heroku[router]: at=info method=HEAD path="/" host=www.qilabs.org request_id={removed} fwd={removed} dyno=web.1 connect=2ms service=15ms status=200 bytes=429
heroku[router]: at=info method=HEAD path="/" host=www.qilabs.org request_id={removed} fwd={removed} dyno=web.1 connect=4ms service=14ms status=200 bytes=428
app[web.1]: info: <anonymous@{removed}>: HTTP HEAD / name=QI, hostname={removed}, pid=6, time=Wed Sep 03 2014 19:49:31 GMT+0000 (UTC)
app[web.1]: info: <anonymous@{removed}>: HTTP HEAD / name=QI, hostname={removed}, pid=5, time=Wed Sep 03 2014 19:49:35 GMT+0000 (UTC)
heroku[router]: at=info method=HEAD path="/" host=www.qilabs.org request_id=39f79216-d71f-40c0-8a59-464325425c1e fwd={removed} dyno=web.1 connect=2ms service=261ms status=200 bytes=429
heroku[web.1]: Process running mem=512M(100.1%)
heroku[web.1]: Error R14 (Memory quota exceeded)
heroku[web.1]: Process running mem=512M(100.1%)
heroku[web.1]: Error R14 (Memory quota exceeded)
heroku[router]: at=info method=HEAD path="/" host=www.qilabs.org request_id={removed} fwd={removed} dyno=web.1 connect=1ms service=8ms status=200 bytes=430
app[web.1]: info: <anonymous@{removed}>: HTTP HEAD / name=QI, hostname={removed}, pid=4, time=Wed Sep 03 2014 19:50:12 GMT+0000 (UTC)
heroku[router]: at=info method=HEAD path="/" host=www.qilabs.org request_id={removed} fwd={removed} dyno=web.1 connect=0ms service=10ms status=200 bytes=430
app[web.1]: info: <anonymous@{removed}>: HTTP HEAD / name=QI, hostname={removed}, pid=4, time=Wed Sep 03 2014 19:50:24 GMT+0000 (UTC)
heroku[web.1]: Process running mem=512M(100.1%)
heroku[web.1]: Error R14 (Memory quota exceeded)
heroku[router]: at=info method=HEAD path="/" host=www.qilabs.org request_id=261f26a7-e5e4-4fba-81a6-ebf427e1257b fwd={removed} dyno=web.1 connect=1ms service=14ms status=200 bytes=429
app[web.1]: info: <anonymous@{removed}>: HTTP HEAD / name=QI, hostname={removed}, pid=4, time=Wed Sep 03 2014 19:50:34 GMT+0000 (UTC)
heroku[web.1]: Process running mem=513M(100.3%)
heroku[web.1]: Error R14 (Memory quota exceeded)
heroku[web.1]: Process running mem=513M(100.3%)
heroku[web.1]: Error R14 (Memory quota exceeded)
heroku[web.1]: Process running mem=514M(100.5%)
heroku[web.1]: Error R14 (Memory quota exceeded)
heroku[web.1]: Process running mem=514M(100.5%)
heroku[web.1]: Error R14 (Memory quota exceeded)
heroku[web.1]: Process running mem=514M(100.6%)
heroku[web.1]: Error R14 (Memory quota exceeded)
(and repeat)

(The HEAD requests are New Relic's.)

I seem to be following all the steps indicated in the README. Perhaps the memory leaks are my own fault, but I never saw these messages before I started using forky. I'll try running without the cluster for a while and see if the errors still occur.

Thoughts, anyone?

@brianc
Owner

brianc commented Sep 5, 2014

Yikes - yes please lemme know if the memory leaks go away when you're not using the cluster. I've used this for a long time in production without problems, but there could still be memory leaks I suppose! I'll stay tuned...

@felipap
Author

felipap commented Sep 6, 2014

Hi, Brian.
Thanks for answering. I've been running without the cluster since then and have had no memory issues so far. Does that mean it's a forky issue?

@joshmosh

This is not a forky issue. Unless you're using the latest io.js, you're most likely using an unstable version of cluster:

https://nodejs.org/api/cluster.html#cluster_cluster

@HeathNaylor

I'm seeing forky max out memory and swap as well; it just creates more and more worker processes, each taking a chunk of memory, until the server crashes.

@marclar

marclar commented Jan 3, 2016

Yes -- the same thing is happening to me. Any thoughts? Would it help to reduce the number of workers forky creates?

@HeathNaylor

@marclar I just pulled out the Forky layer and spread the application across multiple nodes, with the database decoupled. Sorry it isn't a true solution, but it works around the problem.

@marclar

marclar commented Jan 3, 2016

As in, you implemented your own solution with the cluster module? Not sure what you mean by spanning across multiple nodes...

@HeathNaylor

I just have the same application running on several servers, with a load balancer distributing load across them. If I were writing a Node.js application from scratch I might look into forky more seriously, but what I have is an inherited project with a limited budget.

@marclar

marclar commented Jan 6, 2016

@aseemk

aseemk commented Sep 30, 2016

Hello folks. I (@fiftythree) have also been running Forky in production for years now with no issues. I just wanted to chime in with one simple possible explanation:

Forky launches multiple Node processes on the same dyno. That means a single dyno will use more memory than it would without Forky. Specifically, if your Node process uses ~100 MB on its own, Forky running 10 workers will use ~1 GB.

So instead of a Forky memory leak, it's probably simply that you're running more workers than your dyno's memory can support. Reducing the number of workers, or bumping up your dyno size, will probably fix it. And you can monitor your memory usage via Heroku Metrics.
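
To make that arithmetic concrete, here's a rough back-of-the-envelope budget. The numbers are hypothetical; measure your own process's footprint in Heroku Metrics:

```js
// Rough worker budget: how many ~100 MB processes fit in a 512 MB dyno?
// All numbers here are hypothetical -- substitute your own measurements.
var perWorkerMB = 100; // typical footprint of one app process
var dynoMB = 512;      // e.g. a Heroku 1X dyno
var headroomMB = 50;   // slack for the master process and spikes

var maxWorkers = Math.max(1, Math.floor((dynoMB - headroomMB) / perWorkerMB));
console.log(maxWorkers); // => 4; ten such workers would need ~1 GB
```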

If it helps, Heroku provides some helper configs to let you dynamically figure out the optimal number of workers to spawn based on your dyno size.

https://devcenter.heroku.com/articles/node-concurrency

Hope this helps!

@aseemk

aseemk commented Sep 30, 2016

Here's some more specific things we do, if that helps:

  • We heroku config:set WEB_MEMORY to the MB we expect one instance of our app's process to typically take (or grow to). E.g. we set it to 512.
  • Then at runtime, Heroku sets WEB_CONCURRENCY to the number of processes we can safely run per dyno. So on a dyno with 1 GB of RAM, it'll set this to 2.
  • So then you can pass process.env.WEB_CONCURRENCY to Forky directly, but we also cap it at a reasonable maximum (that's a Math.min, so the value can never exceed the cap).
  • The "reasonable maximum" we use is os.cpus().length * 2, i.e. at most 2 processes per core. A sketch of the whole setup follows this list.

Hope this helps also!

@marclar

marclar commented Sep 30, 2016

A gentleman and a scholar, @aseemk ;)
