
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory #46852

Closed
rashmivkulkarni opened this issue Sep 27, 2019 · 12 comments
Labels: bug (Fixes for quality problems that affect the customer experience), Team:Operations (Team label for Operations Team)

Comments

@rashmivkulkarni
Contributor

We're quite often seeing this out-of-memory issue on PR runs, in one of the runs where a test is executed in a loop.

  │ proc [kibana] Security context: 0x0c7d3b79e6e1 <JSObject>
     │ proc [kibana]     1: byteLength(aka byteLength) [0x341de7862c9] [buffer.js:~509] [pc=0x21b5ad0dce2c](this=0x2f223be026f1 <undefined>,string=0x0c938c0e2839 <Very long string[184208075]>,encoding=0x0c7d3b7bcca1 <String[4]: utf8>)
     │ proc [kibana]     2: arguments adaptor frame: 3->2
     │ proc [kibana]     3: fromString(aka fromString) [0x341de79cb79] [buffer.js:342] [bytecode=0x2fa5f29f3eb1 offset=74](this=0x...
     │ proc [kibana] 
     │ proc [kibana] FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
     │ proc [kibana]  1: 0x8dc1c0 node::Abort() [/var/lib/jenkins/workspace/elastic+kibana+pull-request/JOB/x-pack-ciGroup2-14/node/linux-immutable/install/kibana/bin/../node/bin/node]
     │ proc [kibana]  2: 0x8dc20c  [/var/lib/jenkins/workspace/elastic+kibana+pull-request/JOB/x-pack-ciGroup2-14/node/linux-immutable/install/kibana/bin/../node/bin/node]
     │ proc [kibana]  3: 0xad60ae v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/var/lib/jenkins/workspace/elastic+kibana+pull-request/JOB/x-pack-ciGroup2-14/node/linux-immutable/install/kibana/bin/../node/bin/node]
     │ proc [kibana]  4: 0xad62e4 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/var/lib/jenkins/workspace/elastic+kibana+pull-request/JOB/x-pack-ciGroup2-14/node/linux-immutable/install/kibana/bin/../node/bin/node]
     │ proc [kibana]  5: 0xec3972  [/var/lib/jenkins/workspace/elastic+kibana+pull-request/JOB/x-pack-ciGroup2-14/node/linux-immutable/install/kibana/bin/../node/bin/node]
     │ proc [kibana]  6: 0xed318f v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/var/lib/jenkins/workspace/elastic+kibana+pull-request/JOB/x-pack-ciGroup2-14/node/linux-immutable/install/kibana/bin/../node/bin/node]
     │ proc [kibana]  7: 0xea2d3b v8::internal::Factory::NewRawTwoByteString(int, v8::internal::PretenureFlag) [/var/lib/jenkins/workspace/elastic+kibana+pull-request/JOB/x-pack-ciGroup2-14/node/linux-immutable/install/kibana/bin/../node/bin/node]
     │ proc [kibana]  8: 0xfec5f3 v8::internal::String::SlowFlatten(v8::internal::Handle<v8::internal::ConsString>, v8::internal::PretenureFlag) [/var/lib/jenkins/workspace/elastic+kibana+pull-request/JOB/x-pack-ciGroup2-14/node/linux-immutable/install/kibana/bin/../node/bin/node]
     │ proc [kibana]  9: 0xad36d4 v8::internal::String::Flatten(v8::internal::Handle<v8::internal::String>, v8::internal::PretenureFlag) [/var/lib/jenkins/workspace/elastic+kibana+pull-request/JOB/x-pack-ciGroup2-14/node/linux-immutable/install/kibana/bin/../node/bin/node]
     │ proc [kibana] 10: 0xae1480 v8::String::Utf8Length() const [/var/lib/jenkins/workspace/elastic+kibana+pull-request/JOB/x-pack-ciGroup2-14/node/linux-immutable/install/kibana/bin/../node/bin/node]
     │ proc [kibana] 11: 0x8f5476  [/var/lib/jenkins/workspace/elastic+kibana+pull-request/JOB/x-pack-ciGroup2-14/node/linux-immutable/install/kibana/bin/../node/bin/node]
     │ proc [kibana] 12: 0x21b5acc878a1 /
     │ info [kibana] exited with null after 6 minutes

Maybe there is a tweak somewhere in the Jenkins file to increase the memory on the worker node?

cc @brianseeders

@rashmivkulkarni added the bug and Team:Operations labels Sep 27, 2019
@elasticmachine
Contributor

Pinging @elastic/kibana-operations

@brianseeders
Contributor

runbld logs show that this has happened 818 times in the last 24 hours... I think most of the time it doesn't cause a problem that fails the build, but I think it did in @Rasroh's case. See here: https://kibana-ci.elastic.co/job/elastic+kibana+pull-request/4241/JOB=x-pack-ciGroup2-14,node=linux-immutable/consoleFull

@elastic/kibana-operations Do we just need to start setting max_old_space_size for CI? Surprisingly, I don't see it being set anywhere...
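
For illustration, here is a minimal sketch of the kind of change being discussed, assuming the CI scripts export environment variables before launching the test runner (where this would actually live in the Jenkins setup is exactly the open question above):

    # Hypothetical CI environment setup; Node reads NODE_OPTIONS from the environment,
    # so the flag applies to every node process the build spawns.
    export NODE_OPTIONS="${NODE_OPTIONS} --max-old-space-size=4096"
    # ...then run the functional test group as usual, e.g.:
    # node scripts/functional_tests --config <ci-group-config>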

@brianseeders
Contributor

@elastic/kibana-operations this seems to be failing more and more builds, and it also happens locally a lot. We should probably discuss how to handle it. We can probably bump max_old_space_size temporarily, but we need to make sure we don't just keep bumping it higher and higher.

@raokrutarth

Is there a workaround for this? For example, give the Kibana container some env flags for the JVM to work with a larger heap? I see this issue occurring every time I open the "Discover" window and no filter is specified.
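
For what it's worth, Kibana's server is a Node.js process rather than a JVM, so the equivalent knob is Node's --max-old-space-size flag, which Node picks up from the NODE_OPTIONS environment variable. A minimal sketch for a containerized setup, assuming the official image; the tag, heap size, and hosts value here are just placeholders:

    # Hypothetical example: pass a larger Node heap to a Kibana container via NODE_OPTIONS.
    docker run \
      -e NODE_OPTIONS="--max-old-space-size=2048" \
      -e ELASTICSEARCH_HOSTS="http://elasticsearch:9200" \
      docker.elastic.co/kibana/kibana:7.6.2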

@tylersmalley
Contributor

@raokrutarth curious, how many index patterns do you have?

@raokrutarth

@tylersmalley just 1. But the index has around 850 entries, each with over 1000 fields, and each field value can be up to 10k chars. I assumed this wouldn't be too large a scale.

@jl2035

jl2035 commented Mar 23, 2020

I'm also facing this issue and I have 8GB of RAM.

I added NODE_OPTIONS="--max_old_space_size=4096" to /etc/init.d/kibana, under "# Setup any environmental stuff beforehand" and restarted kibana, but it didn't have any effect.

@mistic
Member

mistic commented Mar 23, 2020

@jl2035 if you are adding it there, I think it should be: export NODE_OPTIONS="${NODE_OPTIONS} --max-old-space-size=4096"
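
For context, a minimal sketch of what that section of /etc/init.d/kibana could look like with the export added (the surrounding script contents vary by package and version):

    # Setup any environmental stuff beforehand
    # Hypothetical placement; export the variable so the Kibana process inherits it.
    export NODE_OPTIONS="${NODE_OPTIONS} --max-old-space-size=4096"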

@jl2035

jl2035 commented Mar 23, 2020

@mistic I tried your suggestion, but it also didn't work.

What exactly is the meaning of this output?

Mar 23 19:07:41 wazuh kibana[1873]: <--- Last few GCs --->
Mar 23 19:07:41 wazuh kibana[1873]: [1873:0x2d20f40] 343466 ms: Mark-sweep 1292.1 (1382.3) -> 1292.0 (1381.3) MB, 1758.3 / 0.0 ms (average mu = 0.474, current mu = 0.000) last resort GC in old space requested
Mar 23 19:07:41 wazuh kibana[1873]: [1873:0x2d20f40] 345232 ms: Mark-sweep 1292.0 (1381.3) -> 1292.0 (1381.3) MB, 1765.6 / 0.0 ms (average mu = 0.295, current mu = 0.000) last resort GC in old space requested

And below that:

Mar 23 19:07:41 wazuh kibana[1873]: <--- JS stacktrace --->
Mar 23 19:07:41 wazuh kibana[1873]: ==== JS stack trace =========================================
Mar 23 19:07:41 wazuh kibana[1873]: 0: ExitFrame [pc: 0xc2a378dbe1d]
Mar 23 19:07:41 wazuh kibana[1873]: Security context: 0x23afe5e1e6e9
Mar 23 19:07:41 wazuh kibana[1873]: 1: add [0x23afe5e11831](this=0x064bd2813389 ,0x39219b7b0ad1 <JSArray[0]>)
Mar 23 19:07:41 wazuh kibana[1873]: 2: prepend_comments [0x2c3aafaa4f39] [/usr/share/kibana/node_modules/terser/dist/bundle.min.js:~1] [pc=0xc2a388f9281](this=0x2c3aafa82879 ,t=0x23c8edd419d1 <AST_Dot map = 0x1c9ef34d40b9>)
Mar 23 19:07:41 wazuh kibana[1873]: 3: /* anonymous /(aka / anonym...
Mar 23 19:07:41 wazuh kibana[1873]: FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory

@mistic
Member

mistic commented Mar 23, 2020

It means the process is hitting the memory limit configured for it.

Could you please try adding --max-old-space-size=4096 in the bin/kibana file, precisely on this line: https://github.com/elastic/kibana/blob/master/bin/kibana#L24? Please make sure you are using the right option, spelled with - and not _.
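
The exact contents of that launch line differ between Kibana versions; a minimal sketch of the kind of edit being suggested, assuming the script execs Node roughly like this (the variable names are whatever the script already defines):

    # Hypothetical sketch of the bin/kibana launch line with the heap flag prepended;
    # the real line and its existing flags vary by Kibana version.
    NODE_OPTIONS="--max-old-space-size=4096 ${NODE_OPTIONS}" NODE_ENV=production exec "${NODE}" "${DIR}/src/cli" ${@}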

@jl2035

jl2035 commented Mar 23, 2020

Now, this worked! Thank you sir! Can I now remove this option?

@mistic
Member

mistic commented Mar 23, 2020

@jl2035 you need to keep that option set, otherwise it will use the node defaults and you will experience the out of memory again 😞
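
To sanity-check that the flag is actually being picked up, the effective heap limit of a Node process started with the same NODE_OPTIONS can be read from Node's built-in v8 module; a minimal sketch:

    # Prints the effective heap limit (in MB) for a node process started with these options.
    NODE_OPTIONS="--max-old-space-size=4096" \
      node -e 'console.log(require("v8").getHeapStatistics().heap_size_limit / 1024 / 1024)'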

@mistic closed this as completed Jun 3, 2020