kibana daemon "--max-old-space-size=250" may cause bundling issue #105

Closed
EwyynTomato opened this issue Jan 18, 2017 · 1 comment

@EwyynTomato

Issue Description

There's a line in /etc/init.d/kibana:

NODE_OPTIONS="--max-old-space-size=250"

which limits node to 250 MB of old-space memory. This causes an out-of-memory error whenever an option or plugin is given that requires kibana to re-optimize its dynamic bundles.
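
For reference, the heap cap that such a value translates to can be confirmed by asking V8 directly. This isn't from the original report, but v8.getHeapStatistics() is a standard Node API; plain node below stands in for kibana's bundled node binary:

# print V8's heap size limit under the same flag (plain `node` used for illustration)
node --max-old-space-size=250 -e 'console.log(Math.round(require("v8").getHeapStatistics().heap_size_limit / 1048576) + " MB heap limit")'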

Steps to reproduce:

  • Copy the kibana config kibana.yml, with server.basePath specified, into the container (a docker run sketch follows these steps), e.g.
    server.basePath: "/kibana"

  • When the container starts, the kibana service doesn't come up; the last message in /var/log/kibana/kibana5.log is:

# tail -f /var/log/kibana/kibana5.log 
{"type":"log","@timestamp":"2017-01-18T04:23:53Z","tags":["info","optimize"],"pid":176,"message":"Optimizing and caching bundles for kibana, timelion and status_page. This may take a few minutes"}
  • Running kibana from bash reveals more detail:
# NODE_OPTIONS="--max-old-space-size=250" bin/kibana
  log   [05:41:17.718] [info][optimize] Optimizing and caching bundles for kibana, timelion and status_page. This may take a few minutes

<--- Last few GCs --->

  128078 ms: Mark-sweep 236.5 (285.4) -> 236.5 (285.4) MB, 271.2 / 0.0 ms [allocation failure] [GC in old space requested].
  128349 ms: Mark-sweep 236.5 (285.4) -> 236.5 (285.4) MB, 271.1 / 0.0 ms [allocation failure] [GC in old space requested].
  128633 ms: Mark-sweep 236.5 (285.4) -> 241.3 (264.4) MB, 283.6 / 0.0 ms [last resort gc].
  128917 ms: Mark-sweep 241.3 (264.4) -> 246.2 (264.4) MB, 283.9 / 0.0 ms [last resort gc].


<--- JS stacktrace --->

==== JS stack trace =========================================

Security context: 0x35e66f1cfb51 <JS Object>
    1: block_(aka block_) [0x35e66f104381 <undefined>:~2424] [pc=0x6ee4a6691c9] (this=0x35e66f104381 <undefined>)
    2: /* anonymous */(aka /* anonymous */) [0x35e66f104381 <undefined>:2401] [pc=0x6ee489f37f4] (this=0x35e66f104381 <undefined>,loop=0,labels=0x3e4eb95e19d9 <JS Array[0]>)
    3: function_(aka function_) [0x35e66f104381 <undefined>:~2379] [pc=0x6ee4a5b0dda] (this=0x35e66f104381 <u...

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
 1: node::Abort() [bin/../node/bin/node]
 2: 0x1098b2c [bin/../node/bin/node]
 3: v8::Utils::ReportApiFailure(char const*, char const*) [bin/../node/bin/node]
 4: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [bin/../node/bin/node]
 5: v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationSpace) [bin/../node/bin/node]
 6: v8::internal::Runtime_AllocateInTargetSpace(int, v8::internal::Object**, v8::internal::Isolate*) [bin/../node/bin/node]
 7: 0x6ee469079a7
Aborted (core dumped)
  • Allowing more memory solves the issue:
# NODE_OPTIONS="--max-old-space-size=500" bin/kibana
  log   [05:52:41.636] [info][optimize] Optimizing and caching bundles for kibana, timelion and status_page. This may take a few minutes
  log   [05:54:29.765] [info][optimize] Optimization of bundles for kibana, timelion and status_page complete in 108.12 seconds
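
For concreteness, a docker run along these lines reproduces the setup. It is only a sketch: the image name and the kibana.yml path inside the image are assumptions about this project's layout.

# kibana.yml on the host contains: server.basePath: "/kibana"
# sebp/elk and /opt/kibana/config/kibana.yml are assumed, adjust to the actual image/path
docker run -d -p 5601:5601 -v "$(pwd)/kibana.yml:/opt/kibana/config/kibana.yml" sebp/elk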

Proposal

Currently, I overwrite /etc/init.d/kibana with a larger --max-old-space-size. I don't have a solution for surfacing the out-of-memory error message from the daemon, though (grepping for an 'out of memory' message in /var/log/* turns up nothing). As far as I can tell, it's a silent error.
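
For example (a sketch; it assumes the line in /etc/init.d/kibana is exactly the one quoted above, and 500 is just an arbitrary larger value):

# raise the limit in place, then restart the kibana service
sed -i 's/--max-old-space-size=250/--max-old-space-size=500/' /etc/init.d/kibana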

But I can confirm that kibana is no longer running by running top.

If kibana is still running, the PID recorded in /var/run/kibana5.pid shows up in top with COMMAND node and USER kibana, e.g.

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
  285 kibana    20   0 1447696 311784  12152 R 123.9  4.3   1:39.84 node
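
The same check can be scripted against the pidfile using the standard kill -0 idiom (a sketch, not something the image does today):

# exits 0 if the process recorded in the pidfile is still alive
kill -0 "$(cat /var/run/kibana5.pid)" 2>/dev/null && echo "kibana is running" || echo "kibana stopped or was killed"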
@spujadas
Owner

First of all thanks for reporting this issue so clearly and comprehensively!

The easiest option for me seems to be adding an overridable environment variable that makes it possible to specify a larger max-old-space-size before starting the container.
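
Something along these lines in /etc/init.d/kibana, for instance (NODE_MAX_OLD_SPACE_SIZE is a hypothetical variable name, just to illustrate the idea):

# hypothetical override: keep 250 as the default, let the container environment raise it
NODE_OPTIONS="--max-old-space-size=${NODE_MAX_OLD_SPACE_SIZE:-250}"

It would then be set at container start with docker run -e NODE_MAX_OLD_SPACE_SIZE=500 alongside the usual options.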

Re the process dying silently, it might have been killed by the OOM killer (see #57 and #17 for similar issues), in which case 'killed process' would show up in /var/log/messages (see the check sketched below). Having said that, detecting killed processes would still be a bit of a hassle.
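
The check would be along the lines of:

# look for OOM-killer entries in the syslog (message format varies slightly between kernels)
grep -i 'killed process' /var/log/messages
# or straight from the kernel ring buffer
dmesg | grep -i 'killed process'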
