Node ignoring --max-old-space-size #6153

Closed
mnhan3 opened this Issue Feb 8, 2016 · 40 comments

mnhan3 commented Feb 8, 2016

top output (Kibana started with --max-old-space-size=200 along with GC tracing):

60742 elastics  20   0 3607m 1.8g 114m S  0.0 47.2  15:06.82 /usr/bin/java -Xms1500M -Xmx1500M -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSw 
17468 kibana    20   0 1622m 751m 9268 S  0.7 19.7   0:45.33 /opt/kibana/bin/../node/bin/node --expose-gc --trace-gc --trace-gc-verbose --max-old-space-siz 

Nothing is accessing Kibana, and there's only one index in Elasticsearch, but the Kibana node process continues to eat memory until it gets killed by the OOM killer.

stdout of the GC tracing shows:

[17468]  4999994 ms: Scavenge 180.9 (223.5) -> 171.9 (226.5) MB, 14.7 ms (+ 55.2 ms in 188 steps since last GC) [allocation failure].
[17468] Memory allocator,   used: 231908 KB, available:  38428 KB
[17468] New space,          used:   3477 KB, available:  12906 KB, committed:  32768 KB
[17468] Old pointers,       used: 123574 KB, available:     35 KB, committed: 123841 KB
[17468] Old data space,     used:  37628 KB, available:    815 KB, committed:  38486 KB
[17468] Code space,         used:   9254 KB, available:  12442 KB, committed:  21912 KB
[17468] Map space,          used:   1971 KB, available:   6015 KB, committed:   8190 KB
[17468] Cell space,         used:     83 KB, available:     20 KB, committed:    128 KB
[17468] PropertyCell space, used:     56 KB, available:      7 KB, committed:     64 KB
[17468] Large object space, used:      0 KB, available:  37387 KB, committed:      0 KB
[17468] All spaces,         used: 176045 KB, available:  32243 KB, committed: 225390 KB
[17468] External memory reported:   -385 KB
[17468] Total time spent in GC  : 757.7 ms
[17468]  5142419 ms: Scavenge 184.6 (227.5) -> 175.6 (229.5) MB, 14.7 ms (+ 53.7 ms in 188 steps since last GC) [allocation failure].
[17468] Memory allocator,   used: 234980 KB, available:  35356 KB
[17468] New space,          used:   3489 KB, available:  12894 KB, committed:  32768 KB
[17468] Old pointers,       used: 126653 KB, available:      0 KB, committed: 126864 KB
[17468] Old data space,     used:  38213 KB, available:    229 KB, committed:  38486 KB
[17468] Code space,         used:   9289 KB, available:  12404 KB, committed:  21912 KB
[17468] Map space,          used:   2039 KB, available:   5947 KB, committed:   8190 KB
[17468] Cell space,         used:     83 KB, available:     20 KB, committed:    128 KB
[17468] PropertyCell space, used:     56 KB, available:      7 KB, committed:     64 KB
[17468] Large object space, used:      0 KB, available:  34315 KB, committed:      0 KB
[17468] All spaces,         used: 179825 KB, available:  31503 KB, committed: 228413 KB
[17468] External memory reported:   -398 KB
[17468] Total time spent in GC  : 772.4 ms
[17468]  5284865 ms: Scavenge 188.3 (230.5) -> 179.3 (234.5) MB, 14.1 ms (+ 55.1 ms in 188 steps since last GC) [allocation failure].
[17468] Memory allocator,   used: 240100 KB, available:  30236 KB
[17468] New space,          used:   3487 KB, available:  12896 KB, committed:  32768 KB
[17468] Old pointers,       used: 129744 KB, available:    943 KB, committed: 130895 KB
[17468] Old data space,     used:  38799 KB, available:    613 KB, committed:  39494 KB
[17468] Code space,         used:   9329 KB, available:  12362 KB, committed:  21912 KB
[17468] Map space,          used:   2109 KB, available:   5872 KB, committed:   8190 KB
[17468] Cell space,         used:     83 KB, available:     20 KB, committed:    128 KB
[17468] PropertyCell space, used:     56 KB, available:      7 KB, committed:     64 KB
[17468] Large object space, used:      0 KB, available:  29195 KB, committed:      0 KB
[17468] All spaces,         used: 183610 KB, available:  32715 KB, committed: 233452 KB
[17468] External memory reported:   -412 KB
[17468] Total time spent in GC  : 786.5 ms
[17468] Speed up marking after 1024 steps

This is a RHEL 6 instance on VMware with 4 GB of RAM allocated. It's running ES 2.2 and Kibana 4.4 behind an SSL web proxy. How can I get Kibana to stop running the box out of RAM? I'm already using the latest Kibana release and the old-space flag.

Mike

@rashidkpc rashidkpc closed this Feb 8, 2016

@rashidkpc rashidkpc reopened this Feb 8, 2016

Member

rashidkpc commented Feb 8, 2016

Just re-read your message, can you post the entirety of the top output? It cuts off at --max-old-space-siz


mnhan3 commented Feb 8, 2016

Here's the full command via ps aux.

ps aux|grep node
kibana   17468  0.8 21.7 1766484 852536 pts/0  Sl   13:20   0:52 /opt/kibana/bin/../node/bin/node --expose-gc --trace-gc --trace-gc-verbose --max-old-space-size=200 /opt/kibana/bin/../src/cli

@rashidkpc rashidkpc changed the title from kibana 4.4 memory leak to Node ignoring --max-old-space-size Feb 8, 2016

Member

rashidkpc commented Feb 8, 2016

Changed the title. I haven't seen this happen anywhere; I'll try to replicate.

Member

rashidkpc commented Feb 8, 2016

Is this happening when Kibana is just sitting idle? Are you installing any plugins? Is the optimizer running?

mnhan3 commented Feb 9, 2016

Kibana is just sitting. No dashboards created, no plugins installed. I haven't optimized anything in Kibana; I just added the ES index pattern in Kibana's settings when it first came up. After restarting Kibana, without even accessing it in a browser, memory is growing. The only optimization I remember doing was with Curator: I told it to optimize the indices on ES. Would that cause Kibana to leak memory?

Member

rashidkpc commented Feb 9, 2016

Kibana has a built-in code optimizer that is separate from the Elasticsearch _optimize operation. It can possibly cause memory usage to increase temporarily. We're looking into this.

mnhan3 commented Feb 9, 2016

Do you need any logs from my installation? Anything you want me to test/try? Currently Kibana hasn't killed my VM with --max-old-space-size=200 set. It's using 1 GB of memory, but it hasn't been killed like when I ran it with the defaults and nothing set. It looks like it stabilized at 1 GB, but that's still a huge amount of memory to be holding while idle.

60742 elastics  20   0 3607m 1.8g 114m S  0.3 47.3  20:22.77 /usr/bin/java -Xms1500M -Xmx1500M -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSw
17468 kibana    20   0 1900m 1.0g 6916 S  0.7 27.7  14:18.74 /opt/kibana/bin/../node/bin/node --expose-gc --trace-gc --trace-gc-verbose --max-old-space-siz

From my messages log, the OOM killer killed Kibana at around 1.8+ GB of RAM used when I originally ran it with the default settings and no node options.

Feb  6 04:35:27 elks01 kernel: [60742]   498 60742   921715   454038   0       0             0 java 
Feb  6 04:35:27 elks01 kernel: [60857] 19538 60857  1898775   424612   0       0             0 node 
--snip-
Feb  6 04:35:27 elks01 kernel: Out of memory: Kill process 60857 (node) score 710 or sacrifice child
Feb  6 04:35:27 elks01 kernel: Killed process 60857, UID 19538, (node) total-vm:7595100kB, anon-rss:1696804kB, file-rss:1644kB

Update:
Never mind, I spoke too soon; it started to slowly grow again, oscillating between 1.0 and 1.1 GB of RAM. Accessing Kibana a few times doesn't seem to make it use any additional memory.

jdmchone commented Feb 11, 2016

Any update on this issue? We're experiencing the same thing.

Member

rashidkpc commented Feb 12, 2016

We've been trying to reproduce this with a variety of settings and aren't seeing any memory increase over time, even when setting --max-old-space-size=200. Here's a chart after applying that on our internal dashboarding system:

$ ps aux|grep node
rashidk+  2042  2.1  7.5 1222400 304876 ?      Ssl  Feb11  31:21 /opt/kibana/bin/../node/bin/node --max-old-space-size=200 /opt/kibana/bin/../src/cli

(screenshot: memory usage chart, 2016-02-12 9:59 AM)

mnhan3 commented Feb 12, 2016

So what now? Since I'm not the only one seeing this, there must be a common setup that triggers this issue. My Kibana test server hasn't run out of memory yet, but using 1 GB of RAM when idle with a setting of 200 MB is kind of bad.

On a new 8 GB VM, I untarred the Kibana tar.gz into /opt/kibana (moving it out of the kibana-4.4-linux-x86_64 directory), stuck my kibana.yml into the config dir with the appropriate certs for SSL to ES, added the --max-old-space-size=200 parameter to the kibana script in bin, and fired it up as root (cd bin; ./kibana &). After 11 minutes, the result from top:

36024 root      20   0 1129m 223m 9248 S  1.3  2.8   0:11.27 ./../node/bin/node --max-old-space-size=200 ./../src/cli

A ps:

 ps auxf|grep node
root     36024  1.2  2.8 1160632 232736 pts/0  Sl   11:23   0:11                      \_ ./../node/bin/node --max-old-space-size=200 ./../src/cli
root     36370  0.0  0.0 103304   876 pts/0    S+   11:38   0:00                      \_ grep node

I'm not even accessing this Kibana instance; it's only listening on localhost. It continues to grow at about 1 MB every 3 seconds.

So it takes only ~11 minutes to go beyond the 200 MB limit. Could something in the kibana index on the ES server be "corrupt"/messed up and cause this? Or is it something to do with SSL?

mnhan3 commented Feb 12, 2016

So, for grins, I did the following test: on the new VM with Kibana, I changed the config to point at a nonexistent ES at http://127.0.0.1:9200 and started Kibana up. It complains endlessly in the log about the missing ES, but the memory usage is as follows:

37084 root      20   0  821m 116m 8212 S  0.0  1.5   0:06.55 ./../node/bin/node --max-old-space-size=200 ./../src/cli

Not bad after 7 minutes.

On the other Kibana server, I removed the .kibana index and restarted Kibana, pointing at the ES behind the SSL proxy with SSL certs configured in Kibana.

9826 kibana    20   0 1161m 319m 9240 S  0.3  8.4   0:18.36 /opt/kibana/bin/../node/bin/node --max-old-space-size=200 /opt/kibana/bin/../src/cli

Its memory is growing like before, at 319 MB and climbing.

Neither Kibana is being accessed, but the Kibana pointing at an ES instance with certs is consuming memory at a good 1 MB every 3-5 seconds.

Definitely something is up with having SSL certs in Kibana.
Kibana using SSL:

9826 kibana    20   0 1574m 717m 9240 S  0.7 18.7   0:42.61 /opt/kibana/bin/../node/bin/node --max-old-space-size=200 /opt/kibana/bin/../src/cli
ps aux|grep kiban
kibana    9826  0.8 18.7 1614488 736304 pts/1  Sl   12:05   0:42 /opt/kibana/bin/../node/bin/node --max-old-space-size=200 /opt/kibana/bin/../src/cli

Kibana without SSL:

37084 root      20   0  830m 124m 8212 S  0.0  1.6   0:10.80 ./../node/bin/node --max-old-space-size=200 ./../src/cli
ps aux|grep node
root     37084  0.2  1.5 850076 127284 pts/0   Sl   12:02   0:10 ./../node/bin/node --max-old-space-size=200 ./../src/cli
megastef commented Feb 13, 2016

Which Node.js version do you use? Any change when you switch to e.g. 4.3 or 5.6.0?

mnhan3 commented Feb 15, 2016

@rashidkpc,

With the same setup but without SSL, Kibana maintains itself at 240 MB with --max-old-space-size=200. Looks like there is something weird when SSL is added to the equation. We can't run ES without SSL, so that's not an option. Per security protocol, all our web stuff needs to be HTTPS, so we have to put both ES and Kibana behind an SSL proxy. As soon as I put ES behind an SSL proxy and add the SSL cert option to Kibana, it ramps up to 1.0-1.1 GB of resident memory with --max-old-space-size set to 200.

@megastef,
Whichever version comes with Kibana. I can replace Node.js with another version for testing, but corporate policy prevents that from being done in a production environment: the approved package must be installed intact. I can only change the Node.js in the production environment if there is an official patch and that patch gets approval.

vicvega commented Feb 22, 2016

Same issue here with Kibana 4.3.1 on an ES (2.1.1) cluster with SSL.

shapirus commented Feb 24, 2016

Same issue here on a clean Ubuntu 14.04 install with nothing but the basic Ubuntu repositories and Kibana 4.4.1, with the bundled Node.js (v0.12.10).
Kibana keeps leaking memory with nothing accessing it, SSL or no SSL.

Update: I was too quick in my conclusions. The --max-old-space-size=xxx Node.js option fixed the issue for me.

rgolab commented Feb 24, 2016

The same issue on a clean Debian 8.3 with Kibana 4.4.1 (bundled Node.js v0.12.10) and Elasticsearch 2.2.0. I have no SSL configuration.

megastef commented Feb 25, 2016

Hi, yesterday we updated to Kibana 4.3.1 with Node.js 4.3.1 (yes, not a mistake :) and Elasticsearch 2.2.0, set --max-old-space-size=200, and we don't see this issue; see the memory chart, RSS is < 200 MB.
https://apps.sematext.com/spm-reports/s/bzf7vcARAR

Why is Kibana using Node 0.12?

megastef commented Feb 27, 2016

Guys, do your health checks sometimes return with timeouts? I just removed them (commented out the health checks in the Kibana source code) and memory does not grow as fast as before. (Kibana checks the ES version and status every few seconds.) Look at this chart after 18:30, i.e. after the last restart:
https://apps.sematext.com/spm-reports/s/wza6E4S6Sf

This is the line removed: https://github.com/elastic/kibana/blob/4.3.1/src/plugins/elasticsearch/index.js#L62

Maybe the Kibana team wants to work on those healthCheck functions :)
Here: https://github.com/elastic/kibana/blob/4.3.1/src/plugins/elasticsearch/lib/health_check.js#L64
Checks are scheduled with new timers each time, instead of creating an interval once. Maybe the timer handling is problematic...
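To illustrate the two scheduling styles being contrasted (this is a sketch with made-up function names, not Kibana's actual code): re-scheduling with a fresh setTimeout after every run allocates a new timer and closure per cycle, while a single setInterval is created once. If a run's closure retains per-run state and runs overlap on timeouts, the re-scheduling style is one plausible place for garbage to pile up between collections.

```javascript
// Sketch of the two styles (hypothetical names, not Kibana's code).

// Style A: each run schedules the next one with a brand-new timer.
function scheduleNextCheck(runCheck, delayMs) {
  Promise.resolve()
    .then(runCheck)
    .catch(() => {}) // a failed check still schedules the next run
    .then(() => setTimeout(() => scheduleNextCheck(runCheck, delayMs), delayMs));
}

// Style B: one interval object created once, cleared on shutdown.
function startChecks(runCheck, delayMs) {
  const id = setInterval(runCheck, delayMs);
  return () => clearInterval(id); // call the returned function to stop
}
```

Either style can be leak-free; the point is only that timer-heavy scheduling is a reasonable suspect when idle memory grows.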

damm commented Feb 28, 2016

I can reproduce Kibana growing, or rather leaking, memory.

It's idle, has data on it, and has the timelion plugin installed. 3 hours ago it was at 1166416 KB and now it's at 1489152.

Unfortunately my startup command seems pretty generic, nothing unusual:

root     21165  0.2 18.4 2261776 1506084 ?     Sl   Feb03 100:10 bin/../node/bin/node bin/../src/cli

So not sure if this is the right issue?

P.S. I run this under runit

#!/bin/sh
version="4.4.0"
exec 2>&1
cd /opt/kibana-${version}-linux-x64
#cd /opt/kibana-4.1.1-linux-x64
exec chpst -l /var/run/kibana.lock -- bin/kibana

That's my run file

ES is in a Docker container; this is a Linux host running a 3.17.0 kernel (a bit old). Nothing in the runit logs for the past week (I haven't touched it since the 17th).

megastef commented Feb 28, 2016

A little update: since I removed the health checks in my local Kibana, it runs with stable memory usage < 100 MB; see the chart:
https://apps.sematext.com/spm-reports/s/UsFA1OtXcq
We run with --max-old-space-size=200 using Node 4.3.1 in production.
So the quick fix for growing memory is to comment out this line:

https://github.com/elastic/kibana/blob/4.4.1/src/plugins/elasticsearch/index.js#L62

-> // healthCheck(this, server).start();

damm commented Feb 28, 2016

My bad, @megastef was right. Removing that line (commenting it out) works; I assumed it started a web server, but it just creates the health check loop. Also, it's much faster.

Member

w33ble commented Feb 29, 2016

Bringing this comment from @megastef back here:

"The health and version checks could be your memory consumption problem. I disabled the health check because it produces 1200 HTTP requests per hour to the ES server. That's not critical when all is fine, but it matters when the cluster operates at its limit, and it creates logs in several places (proxy, ES, Kibana, ...). I removed the health checks to stop flooding the logs, while keeping the debug log level for req/res details. The best side effect: memory consumption stayed flat instead of going up to the sky."

Maybe the issue here isn't max-old-space-size after all. This wouldn't be "old space" but active space, which would explain why the memory just keeps growing despite that setting.
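A quick way to probe that distinction on a running Node process is to compare V8's own heap numbers with whole-process RSS, since --max-old-space-size only caps the former. A minimal sketch using the standard process.memoryUsage() API:

```javascript
// Print V8-heap vs whole-process memory. If heapUsed stays flat while rss
// climbs, the growth is outside the heap that --max-old-space-size caps
// (native buffers, TLS structures, and so on).
const mb = (n) => (n / (1024 * 1024)).toFixed(1) + ' MB';

const { rss, heapTotal, heapUsed } = process.memoryUsage();
console.log('rss       (whole process):', mb(rss));
console.log('heapTotal (V8-managed)   :', mb(heapTotal));
console.log('heapUsed  (live objects) :', mb(heapUsed));
```

Logging these three numbers periodically from inside the process would show which side of the line the leak sits on.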

damm commented Mar 1, 2016

This is normally used for CPU flame graphs, but with the JIT symbols exposed to perf it may work for memory as well.

http://www.brendangregg.com/blog/2014-09-17/node-flame-graphs-on-linux.html

dpedu commented Mar 2, 2016

I'm able to reproduce this. I'm running Kibana 4.4.1 as root (sorry) inside a Docker container, passing --max-old-space-size=200 to Kibana. With or without this flag, memory usage creeps up to around 1200 MB over ~12 hours.

No SSL. Default Kibana config except for the Elasticsearch URL and the Kibana listening port. Maybe a couple of minutes of use, then sitting idle.

Member

w33ble commented Mar 10, 2016

Just pushed 4.4.2, 4.3.3 and 4.1.6, all with updated node versions: https://www.elastic.co/blog/kibana-4-4-2-and-4-3-3-and-4-1-6

This should fix the runaway memory issues users have been seeing, but we'd love some confirmation from some of the users that have commented here. When you get around to upgrading, please drop back in here and let us know if things look better for you too. Thanks!

eadgbe commented Mar 11, 2016

Hi, I am using Kibana 4.2.2. Should I try to change to another version or are you also going to update Kibana 4.2.2?

Thanks.

dpedu commented Mar 11, 2016

4.4.2 looks fixed in my use case! With --max-old-space-size=200 my instance stays well within that limit.

Member

w33ble commented Mar 11, 2016

Should I try to change to another version or are you also going to update Kibana 4.2.2

4.2 is outside of our support window, and while 4.3 and 4.1 are too, we decided to over-deliver on those versions anyway. So you'd need to upgrade to at least 4.3.3 (and consequently ES 2.1 or higher, if you're not there already) to get the fix.

@eadgbe

eadgbe commented Mar 13, 2016

We are currently working with ES 2.0.2, which means that we are bound to Kibana 4.2.2.

@GlenRSmith

Contributor

GlenRSmith commented Mar 17, 2016

@eadgbe What is your obstacle to upgrading to ES 2.1?

@eadgbe

eadgbe commented Mar 17, 2016

Hi guys, at the end I upgraded the cluster, and it is working. But, still needed to user the node's parameter:
-max-old-space-size
Thanks.

@spalger spalger referenced this issue in elastic/elasticsearch-js Mar 21, 2016: Memory usage #310 (Open)

@epixa

Member

epixa commented Apr 1, 2016

I'm going to close this since this should be fixed in recent versions of Kibana. There's also a feature request open to make this a little easier to configure: #6727

@epixa epixa closed this Apr 1, 2016

@lifuchao

lifuchao commented Oct 15, 2016

I encountered this problem with Kibana 4.5.4 and Elasticsearch 2.3.4. I hope this will solve my issue. If I still have the problem, I will report back later.
Thanks!

@lifuchao

lifuchao commented Oct 17, 2016

@mnhan3 how did you solve the Kibana "--max-old-space-size" problem?
I encountered this problem with Kibana 4.5.4 and Elasticsearch 2.3.4 on Ubuntu 14.04.
I have tried --max-old-space-size=200 and --max-old-space-size=512, but both failed.

@bm-skutzke

bm-skutzke commented Oct 31, 2016

I'm running Kibana v4.6.1 in a docker container based on CentOS 7. The RPM package contains Node v4.4.7:

bash-4.2$ /opt/kibana/node/bin/node -v
v4.4.7

I think the correct option for Node is --max_old_space_size and not --max-old-space-size:

bash-4.2$ /opt/kibana/node/bin/node --v8-options | grep max_old_space
--max_old_space_size (max size of the old space (in Mbytes))

Kibana seems to be running fine with the following start command and a memory limit of 300 MB for the docker container.

NODE_OPTIONS="--max_old_space_size=128" /opt/kibana/bin/kibana serve -c /config/kibana.yml

@lifuchao

lifuchao commented Nov 2, 2016

@bm-skutzke thanks a lot. Everyone else wrote "--max-old-space-size"; you were the one to point out that the correct form is "--max_old_space_size". I really appreciate it.
By the way, I originally started Kibana with "nohup", which caused it to shut down when I closed Xshell. Now I start it under "screen" and everything is OK.

@Floresj4

Floresj4 commented Mar 28, 2017

What is the actual solution here? Upgrade or configuration? I'm running Kibana 4.3.2 and have tried both forms of the option: --max_old_space_size (as documented in the V8 options) and --max-old-space-size (as several of these issue threads suggest), with no successful outcome.

/usr/share/kibana/bin/../node/bin/node --max_old_space_size=512 /usr/share/kibana/bin/../src/cli
6105 root 20 0 1861m 800m 14m S 2.0 20.2 0:34.10 node

@epixa

Member

epixa commented Mar 28, 2017

@Floresj4 The latest versions of 4.x and 5.x shouldn't need any max old space wrangling.

@Floresj4

Floresj4 commented Mar 28, 2017

@epixa thanks for the reply. Is there a solution that does not require upgrading?

@epixa

Member

epixa commented Mar 28, 2017

I'm not aware of one off-hand, if the max_old_space_size setting isn't working for you.
