
kibana config console.enabled: false gives: "FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory" issue #11886

Closed
darjisanket opened this issue May 18, 2017 · 6 comments
Labels
feedback_needed, Team:Operations

Comments

darjisanket commented May 18, 2017


Kibana version: 5.3.1

Elasticsearch version: 5.3.1

Server OS version: linux

Browser version: Chrome (latest)

Browser OS version: (not specified)

Original install method (e.g. download page, yum, from source, etc.): yum

Description of the problem including expected versus actual behavior:
I wanted to disable Dev Tools in the Kibana GUI, so I set console.enabled: false in kibana.yml. Kibana crashes after adding this setting; before that it was working as expected.
Steps to reproduce:

  1. Added console.enabled: false to kibana.yml
  2. Restarted Kibana and hit the crash below (full log attached)
    kibana_log.txt

Errors in browser console (if relevant):

Provide logs and/or server output (if relevant):
<--- Last few GCs --->

54401 ms: Mark-sweep 193.9 (234.8) -> 193.8 (234.8) MB, 251.8 / 0.0 ms (+ 0.4 ms in 1 steps since start of marking, biggest step 0.4 ms) [allocation failure] [GC in old space requested].
54610 ms: Mark-sweep 193.8 (234.8) -> 193.8 (234.8) MB, 208.9 / 0.0 ms [allocation failure] [GC in old space requested].
54834 ms: Mark-sweep 193.8 (234.8) -> 193.8 (203.8) MB, 224.1 / 0.0 ms [last resort gc].
55060 ms: Mark-sweep 193.8 (203.8) -> 193.7 (203.8) MB, 225.9 / 0.0 ms [last resort gc].

<--- JS stacktrace --->

==== JS stack trace =========================================

Security context: 0xf5e7c3cfb39
1: new Operation [/usr/share/kibana/node_modules/less/lib/less/tree/operation.js:~5] [pc=0x16e9c7b93243] (this=0x2b9242bb7bc9 <an Operation with map 0x233583ed5889>,op=0x27f5747b03f9 <String[1]: *>,operands=0x2b9242bb7b11 <JS Array[2]>,isSpaced=0xf5e7c304381 )
2: arguments adaptor frame: 2->3
4: eval [/usr/share/kibana/node_modules/less/lib/less/tree/negative.js:16] [pc=0x...

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
1: node::Abort() [/usr/share/kibana/bin/../node/bin/node]
2: 0x109b7ac [/usr/share/kibana/bin/../node/bin/node]
3: v8::Utils::ReportApiFailure(char const*, char const*) [/usr/share/kibana/bin/../node/bin/node]
4: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [/usr/share/kibana/bin/../node/bin/node]
5: v8::internal::Factory::NewByteArray(int, v8::internal::PretenureFlag) [/usr/share/kibana/bin/../node/bin/node]
6: v8::internal::TranslationBuffer::CreateByteArray(v8::internal::Factory*) [/usr/share/kibana/bin/../node/bin/node]
7: v8::internal::LCodeGenBase::PopulateDeoptimizationData(v8::internal::Handle<v8::internal::Code>) [/usr/share/kibana/bin/../node/bin/node]
8: v8::internal::LChunk::Codegen() [/usr/share/kibana/bin/../node/bin/node]
9: v8::internal::OptimizedCompileJob::GenerateCode() [/usr/share/kibana/bin/../node/bin/node]
10: v8::internal::Compiler::FinalizeOptimizedCompileJob(v8::internal::OptimizedCompileJob*) [/usr/share/kibana/bin/../node/bin/node]
11: v8::internal::OptimizingCompileDispatcher::InstallOptimizedFunctions() [/usr/share/kibana/bin/../node/bin/node]
12: v8::internal::StackGuard::HandleInterrupts() [/usr/share/kibana/bin/../node/bin/node]
13: v8::internal::Runtime_StackGuard(int, v8::internal::Object**, v8::internal::Isolate*) [/usr/share/kibana/bin/../node/bin/node]
14: 0x16e9c45092a7

Describe the feature:

jbudz (Member) commented May 18, 2017

Enabling or disabling plugins currently causes the server to rebundle assets, which may take ~2 GB of memory. We are tracking the removal of this process at #7322.

How much memory does your server have? If it's limited, a workaround is to generate the assets on a different server and copy the files over.
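A minimal sketch of that workaround, assuming the yum layout from this report, where the generated bundles live under /usr/share/kibana/optimize. The build host name is hypothetical, and the sync command is only echoed here:

```shell
# Hypothetical host with enough free memory for the optimize/rebundle step.
BUILD_HOST=build-host
KIBANA_HOME=/usr/share/kibana   # yum install location used in this report
# 1) On $BUILD_HOST, start Kibana once with the same kibana.yml so the
#    optimize step finishes there.
# 2) Copy the generated assets over to the memory-constrained host:
SYNC_CMD="rsync -a ${BUILD_HOST}:${KIBANA_HOME}/optimize/ ${KIBANA_HOME}/optimize/"
echo "$SYNC_CMD"
```

Both hosts would need the same Kibana version and plugin set for the copied bundles to be valid.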

@jbudz added the Team:Operations and feedback_needed labels on May 18, 2017
darjisanket (Author) commented:

We are running this as a Kubernetes pod, and the node has enough memory:
free -g
              total        used        free      shared  buff/cache   available
Mem:              7           1           0           0           5           5
Swap:             0           0           0

Any thoughts on this?

tylersmalley (Contributor) commented:

How are you running Kibana? Are you by chance specifying the max_old_space_size?

darjisanket (Author) commented May 19, 2017

Yes, we are using NODE_OPTIONS=--max-old-space-size=200.
Thanks for your input. I tried changing it to 512 and it worked.
Could you suggest the best value of max-old-space-size for Kibana?
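For reference, the change described above amounts to raising the heap cap wherever NODE_OPTIONS is set for the Kibana process. A sketch, assuming the yum layout from this report (the Kibana start line is shown as a comment only):

```shell
# Before: NODE_OPTIONS=--max-old-space-size=200  -> crashed during rebundle.
# A 512 MB cap was enough in this report, though the one-time
# optimize/rebundle step can transiently need up to ~2 GB (see above).
export NODE_OPTIONS="--max-old-space-size=512"
echo "$NODE_OPTIONS"
# Then start Kibana as usual, e.g.:
#   /usr/share/kibana/bin/kibana
```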

tylersmalley (Contributor) commented:

I would suggest removing it altogether and monitoring usage; most, if not all, of the issues associated with its use have been resolved in recent versions, including 5.3.1.

tylersmalley (Contributor) commented:

If you do want to keep it, temporarily remove the NODE_OPTIONS when installing/removing plugins.
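One way to do that, sketched under the assumption that NODE_OPTIONS is exported in the service environment: clear it just for the plugin command (so the rebundle step is not heap-capped) without unsetting it in the running shell. The real plugin command is shown only as a comment, with a placeholder plugin name:

```shell
export NODE_OPTIONS="--max-old-space-size=512"   # the usual runtime cap
# env -u removes one variable from the child's environment only;
# the parent shell keeps its value.
CHILD=$(env -u NODE_OPTIONS sh -c 'echo "child sees: [${NODE_OPTIONS}]"')
echo "$CHILD"
# In practice the child command would be, e.g.:
#   env -u NODE_OPTIONS /usr/share/kibana/bin/kibana-plugin remove <plugin>
```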
