
Out of Memory during garbage collection #5

Open
azriel91 opened this issue Aug 12, 2023 · 2 comments

azriel91 commented Aug 12, 2023

Heya, I'm using this alongside LSP-rust-analyzer, and getting the following crash:

stack trace
<--- Last few GCs --->

[15621:0x7092010]   231914 ms: Mark-Compact 8049.4 (8233.2) -> 8038.4 (8238.4) MB, 2773.92 / 0.00 ms  (average mu = 0.754, current mu = 0.008) allocation failure; scavenge might not succeed
[15621:0x7092010]   236639 ms: Mark-Compact 8054.4 (8238.4) -> 8043.2 (8243.2) MB, 4691.80 / 0.00 ms  (average mu = 0.500, current mu = 0.007) allocation failure; scavenge might not succeed


<--- JS stacktrace --->

FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
 1: 0xc8d700 node::Abort() [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 2: 0xb6b8f3  [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 3: 0xeac370 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 4: 0xeac657 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 5: 0x10bdcc5  [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 6: 0x10d5b48 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 7: 0x10abc61 v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 8: 0x10acdf5 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 9: 0x1089436 v8::internal::Factory::AllocateRaw(int, v8::internal::AllocationType, v8::internal::AllocationAlignment) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
10: 0x107af34 v8::internal::FactoryBase<v8::internal::Factory>::AllocateRawWithImmortalMap(int, v8::internal::AllocationType, v8::internal::Map, v8::internal::AllocationAlignment) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
11: 0x107d7b6 v8::internal::FactoryBase<v8::internal::Factory>::NewRawOneByteString(int, v8::internal::AllocationType) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
12: 0x13cdb17 v8::internal::String::SlowFlatten(v8::internal::Isolate*, v8::internal::Handle<v8::internal::ConsString>, v8::internal::AllocationType) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
13: 0x1506bdd v8::internal::Runtime_StringCharCodeAt(int, unsigned long*, v8::internal::Isolate*) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
14: 0x7f6756699ef6
LSP-file-watcher-chokidar: Watcher process ended. Exception: None
LSP: rust-analyzer crashed (1 / 5 times in the last 180.0 seconds), exit code 101, exception: None

The 8 GB heap limit is what I added to my environment using:

export NODE_OPTIONS="--max-old-space-size=8192"

I couldn't figure out why so much memory is used, but the codebase I work with is relatively large (repo, 55k LOC for the project itself, plus 429 dependencies).

Can you see something in that stack trace that I can't?

rchl (Member) commented Aug 12, 2023

Are there no other servers running at the same time? The current implementation uses a single file watcher instance for however many servers are started. Can you reproduce this issue with just this single project being opened in ST?

On Mac the LSP-file-watcher-chokidar process seems to be using around 60MB on that project but since the file watching implementations can vary greatly between operating systems, that might not mean much.
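One thing that can inflate a watcher's memory on a large Rust workspace is Cargo's `target/` directory, which for a project with 429 dependencies can hold hundreds of thousands of files. As a hypothetical sketch (this is not the plugin's actual code, and `shouldIgnore` is a made-up helper), a predicate passed to chokidar's `ignored` option can keep those paths out of the watch set:

```javascript
// Hypothetical helper: decide whether a path should be excluded from watching.
// Skipping build output and hidden directories keeps chokidar's internal
// bookkeeping small on large Rust workspaces.
function shouldIgnore(p) {
  // Normalize to forward slashes so the check also works on Windows paths.
  const segments = p.replace(/\\/g, '/').split('/');
  return segments.some(
    (seg) => seg === 'target' || seg === 'node_modules' || seg.startsWith('.')
  );
}

// Usage sketch (chokidar accepts a function for its `ignored` option):
//   const chokidar = require('chokidar');
//   const watcher = chokidar.watch(root, { ignored: shouldIgnore, ignoreInitial: true });
```

Whether the plugin exposes any way to configure such exclusions is a separate question; this only illustrates where the file count could be coming from.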

azriel91 (Author)

Heya, I haven't gathered solid evidence, but I think there's only one instance -- I only use LSP-rust-analyzer with Sublime Text, and I believe it keeps at most one server instance around.

More importantly, I think the issue is aggravated by what I was doing, which is a combination of the following:

  • Use LSP-rust-analyzer and LSP-file-watcher-chokidar
  • Replace the vendored rust-analyzer binary with a symlink to ~/.cargo/bin/rust-analyzer
  • One of the Rust Analyzer updates within the past 2 weeks changed something (not sure what)
  • That change isn't compatible with LSP-rust-analyzer or this plugin (more likely the former)

I switched back to the vendored RA, and the out-of-memory from the chokidar plugin still happened, with a slightly shorter stack trace:

// I removed all the `LSP-file-watcher-chokidar: ERROR: ` prefixes
<--- Last few GCs --->

[6820:0x6804010]   653289 ms: Mark-Compact 7993.0 (8234.1) -> 7981.7 (8238.6) MB, 3425.54 / 0.00 ms  (average mu = 0.546, current mu = 0.012) allocation failure; scavenge might not succeed
[6820:0x6804010]   658644 ms: Mark-Compact 7997.6 (8238.6) -> 7986.6 (8243.9) MB, 5327.27 / 0.00 ms  (average mu = 0.306, current mu = 0.005) allocation failure; scavenge might not succeed


<--- JS stacktrace --->

FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
 1: 0xc8d700 node::Abort() [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 2: 0xb6b8f3  [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 3: 0xeac370 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 4: 0xeac657 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 5: 0x10bdcc5  [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 6: 0x10d5b48 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 7: 0x10abc61 v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 8: 0x10acdf5 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
 9: 0x108a366 v8::internal::Factory::NewFillerObject(int, v8::internal::AllocationAlignment, v8::internal::AllocationType, v8::internal::AllocationOrigin) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
10: 0x14e5196 v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [/home/azriel/.nvm/versions/node/v20.2.0/bin/node]
11: 0x7f031bed9ef6
LSP-file-watcher-chokidar: Watcher process ended. Exception: None

It's much more stable now when using the vendored RA, so I guess that means:

  • It's probably okay to ignore this issue, since LSP-file-watcher-chokidar rarely crashes with the vendored RA.
  • The next time the RA tag is updated in LSP-rust-analyzer, the plugin will need to handle whatever change is causing the instability.[1]

[1] Sorry, I don't have logs from the nightly-RA + LSP-ra interaction -- nothing besides the above stack traces appeared in the Sublime Text console, so I couldn't work out what the issue was.
