
Too much memory used when starting nebula-studio in a container on a server with many CPU cores #22

Closed
matrixji opened this issue Oct 2, 2021 · 1 comment

Comments

@matrixji
Contributor

matrixji commented Oct 2, 2021

Describe the bug (must be provided)

I'm using Kubernetes to start nebula-studio with a 2 GB memory limit, and it always gets OOM-killed on startup.
I then checked with docker-compose: it starts many Node worker processes (one per CPU core). Since I'm using a server with 72 cores and each worker process needs about 50 MB of memory, the total comes to about 3.6 GB. After I increased the memory limit in the Kubernetes deployment, it was OK.

Your Environments (must be provided)

  • OS: Linux
  • Node-version: N/A (using the official container)
  • Studio-version: official container image: vesoft/nebula-graph-studio:v3

How To Reproduce (must be provided)

Steps to reproduce the behavior:

It should be clear from the description above.

Expected behavior

I think there is no need for so many worker processes at startup.
A simple solution would be to pass a suitable startup option such as --workers=3, the same as the non-container startup does.

Contents of package.json:

{
  "start": "egg-scripts start --daemon --title=egg-server-nebula-graph-studio --workers=3",
  "docker-start": "egg-scripts start --title=egg-server-nebula-graph-studio"
}

And if possible, we should calculate the worker count by checking the following inside the container:

  • How many CPUs the node has, from nproc
  • How many logical CPUs are available from cgroups, checking:
    • /sys/fs/cgroup/cpu/cpu.cfs_period_us
    • /sys/fs/cgroup/cpu/cpu.cfs_quota_us
  • How much memory is available from cgroups, checking /sys/fs/cgroup/memory/memory.limit_in_bytes

Additional context

N/A

matrixji added a commit to matrixji/nebula-studio that referenced this issue Oct 2, 2021
Just like normal startup, simply use --workers=3 for startup under docker.
This reduces the memory requirement when the host has many CPU cores.

issue: vesoft-inc#22

Signed-off-by: Ji Bin <matrixji@live.com>
hetao92 pushed a commit that referenced this issue Oct 12, 2021
Just like normal startup, simply use --workers=3 for startup under docker.
This reduces the memory requirement when the host has many CPU cores.

issue: #22

Signed-off-by: Ji Bin <matrixji@live.com>
@hetao92
Contributor

hetao92 commented Oct 12, 2021

merged
