
Extreme virtual memory usage #363

Closed
bataras opened this issue Sep 23, 2014 · 3 comments
bataras commented Sep 23, 2014

I've been running a 5-node Consul installation for about a month in a dev/test environment. 3 nodes are running as servers and 2 as clients. Running Consul v0.4.0.

The server nodes show an extremely high VIRT memory consumption, while the clients look fine.

Consul itself seems to be fine. The UI, services, k/v etc are working.

Here are a few lines from top...

top - 21:38:14 up 9 days, 18:58,  1 user,  load average: 0.00, 0.01, 0.05
Tasks: 182 total,   2 running, 180 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.3 us,  0.3 sy,  0.0 ni, 99.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.3 st
KiB Mem:   3854780 total,  2513628 used,  1341152 free,   397772 buffers
KiB Swap:        0 total,        0 used,        0 free.  1559008 cached Mem

  PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
15195 root      20   0 40.502g  40644  12912 S  1.0  1.1  53:01.48 consul

Note the ~40GB VIRT usage.
Note also that I'm running Consul via a Docker container as follows...

docker run --name consul -h $HOSTNAME  \
    -p 10.0.1.1:8300:8300 \
    -p 10.0.1.1:8301:8301 \
    -p 10.0.1.1:8301:8301/udp \
    -p 10.0.1.1:8302:8302 \
    -p 10.0.1.1:8302:8302/udp \
    -p 10.0.1.1:8400:8400 \
    -p 10.0.1.1:8500:8500 \
    -p 172.17.42.1:53:53/udp \
    -d -v /mnt:/data \
    progrium/consul -server -advertise 10.0.1.1 -bootstrap-expect 3

armon commented Sep 23, 2014

@bataras This is actually expected behavior. We use LMDB for our storage engine, and it relies on doing large mmap()s internally. Thus our baseline virtual memory usage is about 40GB, plus an additional few GB from the Go runtime.

This is not a concern, since the amount that is actually resident is not that high. In this case, only ~40MB is resident, which is reasonable for the server nodes.

Closing, since this is expected.

armon closed this as completed Sep 23, 2014
bataras commented Sep 23, 2014

OK, cool. I looked around before filing the issue (i.e. googled "consul memory", etc.). Sorry for the noise.

armon commented Sep 23, 2014

No worries, I'm actually re-opening this to add to an FAQ.

armon reopened this Sep 23, 2014
armon closed this as completed in e7326d0 Oct 14, 2014
duckhan pushed a commit to duckhan/consul that referenced this issue Oct 24, 2021
* Health checks base (hashicorp#333)

Add the health checks controller to connect-inject for service-mesh.

Co-authored-by: Ashwin Venkatesh <ashwin@hashicorp.com>
Co-authored-by: Iryna Shustava <ishustava@users.noreply.github.com>
Co-authored-by: Luke Kysow <1034429+lkysow@users.noreply.github.com>