node only using half of the available memory #35573

Open
daveisfera opened this issue Oct 9, 2020 · 11 comments
Labels
memory Issues and PRs related to the memory management or memory footprint.

Comments

@daveisfera

  • Version: 12.19.0
  • Platform: Debian Buster
  • Subsystem:

What steps will reproduce the bug?

  1. Start a container with limited memory (something like docker run --rm -it --memory=1G --memory-swap=1G node:12.19.0-buster-slim bash)
  2. Inspect heap size (node -e 'const v8 = require("v8"); console.log(v8.getHeapStatistics());')
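For concreteness, here is a small check (my illustration, not part of the original report) that compares V8's configured heap ceiling with the container's memory limit; the cgroup v1 path is what Docker exposes on a Buster-era host, with the cgroup v2 path as a fallback:

```js
// compare-limits.js — illustrative only
const v8 = require('v8');
const fs = require('fs');

// V8's configured heap ceiling for this process.
const heapLimit = v8.getHeapStatistics().heap_size_limit;

// The container's memory limit, read from the cgroup filesystem.
let cgroupLimit = null;
for (const p of ['/sys/fs/cgroup/memory/memory.limit_in_bytes',  // cgroup v1
                 '/sys/fs/cgroup/memory.max']) {                 // cgroup v2
  try {
    const raw = fs.readFileSync(p, 'utf8').trim();
    if (raw !== 'max') { cgroupLimit = Number(raw); break; }
  } catch (_) { /* path not present on this host */ }
}

const mib = (n) => (n / 1024 / 1024).toFixed(0) + ' MiB';
console.log('V8 heap_size_limit:', mib(heapLimit));
if (cgroupLimit !== null) console.log('container limit:   ', mib(cgroupLimit));
```

In the 1 GiB container above, heap_size_limit comes out at roughly half the container limit, which is the behavior being reported.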

How often does it reproduce? Is there a required condition?

100%

What is the expected behavior?

node would use all available memory

What do you see instead?

node is only using half of the memory, so we either need to give the container twice as much memory as we really want it to use (which is problematic), or specify --max-old-space-size in addition to the container limit, which means keeping two different settings in sync, and that's a big pain
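For concreteness, the workaround being described looks something like this, where 768 is an arbitrary illustrative value that has to be kept consistent with --memory by hand:

```
docker run --rm --memory=1G --memory-swap=1G node:12.19.0-buster-slim \
  node --max-old-space-size=768 app.js
```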

Additional information

It appears that it was changed here. Only using half of the memory makes sense when running on a system with multiple processes, but when running in a Docker container, using all of the memory allocated to the container makes more sense

@ChALkeR ChALkeR added question Issues that look for answers. and removed question Issues that look for answers. labels Oct 9, 2020
@addaleax addaleax added the memory Issues and PRs related to the memory management or memory footprint. label Oct 11, 2020
@addaleax
Member

> It appears that it was changed here. Only using half of the memory makes sense when running on a system with multiple processes, but when running in a Docker container, using all of the memory allocated to the container makes more sense

Yeah, that seems fair. I’m wondering if there’s a good way to detect whether we’re in such an environment, given that docker containers can also do a lot of different things, including running multiple processes that need to share memory.

@mmomtchev
Contributor

I don't think there is a reliable way to know that the Node process is going to be the only one using the host's memory.
But maybe an option along the lines of --max-old-space-size=everything could fulfill the requirement

@Flarna
Member

Flarna commented Oct 13, 2020

Maybe --max-old-space-size-percentage=90 instead of a single everything?

@mmomtchev
Contributor

The problem is that --max-old-space-size is a V8 option that is passed directly to V8, while --max-old-space-size-percentage=xx would need to be a Node.js option.
I see two solutions: either generate a --max-old-space-size=xx option in node.cc:ProcessGlobalArgs() from the new option, or pass the args to environment.cc:SetIsolateCreateParamsForNode() to readjust the amount returned by ResourceConstraints::ConfigureDefaults()

@mikl

mikl commented Jan 13, 2021

> Yeah, that seems fair. I’m wondering if there’s a good way to detect whether we’re in such an environment, given that docker containers can also do a lot of different things, including running multiple processes that need to share memory.

Running as PID=1 would be a decent metric. No system doing that would survive the process running out of heap space anyway.

Since this is a maximum and not a fixed allocation, processes can still share memory, to a limit.

There’s no really great default here. No matter what the heap limit is set to, a multi-process environment could still exhaust available memory: three Node.js processes, each with the default heap limit of 50% of available RAM, could together commit 150% of it.

@daveisfera
Author

My primary concern (and I believe a common use case) is an orchestration system (ECS, Kubernetes, etc.) where the service has nodes coming up and down based on need. Ideally, the memory available to the process is controlled by the orchestration system, but currently only half of the memory is used unless --max-old-space-size is specified, and that value then has to be kept in sync with the memory assigned by the orchestration system; having two places to specify the same thing is just asking for trouble.
So a way to specify using all of the memory instead of just half (for example, setting the value to 0 or -1), or even being able to set it as a percentage, would allow this to be controlled from a single location and avoid potential problems and unused resources.

@mikl

mikl commented Jan 13, 2021

I’d still argue that it would be better for Node.js to use all available memory inside Docker by default, since this affects everyone who uses Node with Docker memory limits, and it would be a huge time saver for them to not have to run into this problem and figure out that they need to set --max-old-space-size=-1 (or whatever) in most cases.

@daveisfera
Author

I agree completely and would LOVE it to be the default (at least when running in a container), but if there's a concern about it being a "breaking change" or something like that, then I'd at least like to have a way to turn that on.

@jasnell
Member

jasnell commented Jan 13, 2021

A key challenge with allocating so much memory to the v8 heap is that there's a good deal of memory allocated in Node.js that is outside of that heap. Also consider that every worker thread has its own v8 heap. It's going to be very difficult to generalize an approach that works consistently.
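To make that concrete (my snippet, not from the thread): process.memoryUsage() reports the whole-process footprint alongside the V8 heap, and the gap between rss and heapTotal is exactly the off-heap memory being described:

```js
// footprint.js — rss covers the entire process; heapTotal is just V8's heap.
// `external` and `arrayBuffers` count native memory tied to JS objects
// (e.g. Buffers) that lives outside the V8 heap.
const usage = process.memoryUsage();
for (const [key, bytes] of Object.entries(usage)) {
  console.log(key.padEnd(12), (bytes / 1024 / 1024).toFixed(1), 'MiB');
}
```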

@daveisfera
Author

Agreed that worker threads add a complication, but the current method doesn't really address that concern either, and I don't think the ability to add more complexity through worker threads should prevent the default behavior from being improved to take advantage of the available resources

@janeklb

janeklb commented Jun 29, 2022

> ... that value then has to be kept in sync with the memory assigned by the orchestration system; having two places to specify the same thing is just asking for trouble.

One way around this is to have a base image that asks the orchestration layer for the limit at container startup time (e.g. as part of a startup script) and dynamically sets the --max-old-space-size value accordingly.

(though a better default for PID=1 node processes, or some other standard container environment detection would be even better!)
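A minimal sketch of that approach (my illustration, not janeklb's actual script; the 75% fraction and the cgroup paths are assumptions): a launcher reads the container's memory limit at startup and re-execs node with a derived --max-old-space-size:

```js
#!/usr/bin/env node
// launch.js — usage: node launch.js app.js [args...]
const { readFileSync } = require('fs');
const { spawnSync } = require('child_process');

function cgroupLimitBytes() {
  // cgroup v2 first, then v1; returns null if no limit can be read.
  // (A v1 host with no limit reports a huge number; not handled here.)
  for (const p of ['/sys/fs/cgroup/memory.max',
                   '/sys/fs/cgroup/memory/memory.limit_in_bytes']) {
    try {
      const raw = readFileSync(p, 'utf8').trim();
      if (raw !== 'max') return Number(raw);
    } catch (_) { /* path not present */ }
  }
  return null;
}

const args = process.argv.slice(2);
const limit = cgroupLimitBytes();
if (limit) {
  // Leave headroom for off-heap allocations; 75% is an arbitrary choice.
  const mib = Math.floor((limit * 0.75) / (1024 * 1024));
  args.unshift(`--max-old-space-size=${mib}`);
}
const result = spawnSync(process.execPath, args, { stdio: 'inherit' });
process.exit(result.status === null ? 1 : result.status);
```

This keeps the heap limit derived from the single source of truth (the container limit), which addresses the synchronization problem raised earlier in the thread.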
