import CJS from ES6 module allocates all available virtual memory #37314
Have you tried dynamic import from CJS? Do you see similar results?

logger = import("debug").catch(console.error);
setTimeout(() => console.log("All Done!"), 86400 * 1000);
I just tried that and it behaves exactly the same. I had to put a "let" in front of the first line, like so:

let logger = import("debug").catch(console.error);
The same as what?
Sorry, I should have been more clear. The dynamic import allocates tons of VM, just like the static import in an .mjs file. I tried it both ways. These are the files I used to test:
Just for clarity: why is allocating 11 GB of virtual memory a problem?
This apparently doesn't depend on the module being imported.
It doesn't seem to cause problems. Surprisingly, I can run an arbitrary number of these processes that have each allocated 11 GB of memory (approximately equal to all physical memory and all swap on the system combined). The system doesn't slow down, doesn't swap, and doesn't reject memory allocations from other processes. It just seems so odd: it doesn't happen under CJS, and it started happening at a particular version of Node, so I thought someone might want to know.
Virtual memory is basically just the address space reserved by the program; it's not bound by physical memory at all.
Should we close this as working as expected?
Closing, as no actual problems were mentioned (and a long time has passed). Please feel free to comment if you think this should be reopened.
FWIW, this is the call stack that leads to the huge allocation:

(gdb) where
#0 __GI___mmap64 (addr=0x0, len=len@entry=10737479680, prot=prot@entry=0,
flags=flags@entry=16418, fd=fd@entry=-1, offset=offset@entry=0)
at ../sysdeps/unix/sysv/linux/mmap64.c:48
#1 0x000055555835d534 in v8::base::(anonymous namespace)::Allocate (
page_type=v8::base::(anonymous namespace)::PageType::kPrivate,
access=v8::base::OS::MemoryPermission::kNoAccess, size=10737479680,
hint=<optimized out>) at ../deps/v8/src/base/platform/platform-posix.cc:168
#2 v8::base::OS::Allocate (hint=<optimized out>, size=10737418240,
alignment=65536, access=v8::base::OS::MemoryPermission::kNoAccess)
at ../deps/v8/src/base/platform/platform-posix.cc:370
#3 0x00005555571a7152 in v8::internal::AllocatePages (
page_allocator=0x55555b5a5b20 <v8::internal::(anonymous namespace)::PageAllocatorInitializer::PageAllocatorInitializer()::default_page_allocator>,
hint=hint@entry=0x0, size=size@entry=10737418240,
alignment=alignment@entry=65536,
access=access@entry=v8::PageAllocator::kNoAccess)
at ../deps/v8/src/utils/allocation.cc:177
#4 0x0000555556cbba2f in v8::internal::BackingStore::<lambda()>::operator() (
__closure=0x7fffffffcf40, __closure=0x7fffffffcf40)
at ../deps/v8/src/objects/backing-store.cc:367
#5 std::_Function_handler<bool(), v8::internal::BackingStore::TryAllocateWasmMemory(v8::internal::Isolate*, size_t, size_t, v8::internal::SharedFlag)::<lambda()> >::_M_invoke(const std::_Any_data &) (__functor=...)
at /usr/include/c++/9/bits/std_function.h:285
#6 0x0000555556cbd20a in std::function<bool ()>::operator()() const (
this=0x7fffffffcf40) at /usr/include/c++/9/bits/std_function.h:683
#7 v8::internal::BackingStore::<lambda(const std::function<bool()>&)>::operator()
(__closure=<synthetic pointer>, __closure=<synthetic pointer>, fn=...)
at ../deps/v8/src/objects/backing-store.cc:322
#8 v8::internal::BackingStore::TryAllocateWasmMemory (
isolate=isolate@entry=0x55555b6160b0, initial_pages=initial_pages@entry=1,
maximum_pages=<optimized out>, maximum_pages@entry=65536,
shared=shared@entry=v8::internal::SharedFlag::kNotShared)
at ../deps/v8/src/objects/backing-store.cc:372
#9 0x0000555556cbf556 in v8::internal::BackingStore::AllocateWasmMemory (
isolate=isolate@entry=0x55555b6160b0, initial_pages=initial_pages@entry=1,
maximum_pages=65536, shared=shared@entry=v8::internal::SharedFlag::kNotShared)
at ../deps/v8/src/objects/backing-store.cc:442
#10 0x000055555744fd5d in v8::internal::WasmMemoryObject::New (
isolate=0x55555b6160b0, initial=initial@entry=1, maximum=maximum@entry=-1,
shared=<optimized out>) at ../deps/v8/src/wasm/wasm-objects.cc:876
#11 0x00005555573d3224 in v8::internal::wasm::InstanceBuilder::AllocateMemory (
this=this@entry=0x7fffffffd220)
at ../deps/v8/src/wasm/module-instantiate.cc:1647
#12 0x00005555573dd587 in v8::internal::wasm::InstanceBuilder::Build (
this=0x7fffffffd220) at ../deps/v8/src/wasm/module-instantiate.cc:477
#13 0x00005555573de8d1 in v8::internal::wasm::InstantiateToInstanceObject (
isolate=isolate@entry=0x55555b6160b0, thrower=thrower@entry=0x7fffffffd3e0,
module_object=module_object@entry=..., imports=imports@entry=...,
memory_buffer=memory_buffer@entry=...)
at ../deps/v8/src/wasm/module-instantiate.cc:383
#14 0x0000555557407404 in v8::internal::wasm::WasmEngine::SyncInstantiate (
this=this@entry=0x55555b613b20, isolate=isolate@entry=0x55555b6160b0,
thrower=thrower@entry=0x7fffffffd3e0, module_object=module_object@entry=...,
imports=imports@entry=..., memory=memory@entry=...)
at ../deps/v8/src/wasm/wasm-engine.cc:567
#15 0x00005555574075ad in v8::internal::wasm::WasmEngine::AsyncInstantiate (
this=0x55555b613b20, isolate=isolate@entry=0x55555b6160b0,
resolver=std::unique_ptr<class v8::internal::wasm::InstantiationResultResolver> = {...}, module_object=..., imports=imports@entry=...)
at ../deps/v8/src/handles/maybe-handles.h:38
#16 0x0000555557424096 in v8::(anonymous namespace)::WebAssemblyInstantiate (
args=...) at /usr/include/c++/9/bits/move.h:74
#17 0x000055555680393f in v8::internal::FunctionCallbackArguments::Call (
this=this@entry=0x7fffffffd6c0, handler=..., handler@entry=...)
at ../deps/v8/src/api/api-arguments-inl.h:158
#18 0x0000555556804721 in v8::internal::(anonymous namespace)::HandleApiCallHelper<false> (isolate=isolate@entry=0x55555b6160b0, function=..., function@entry=...,
new_target=..., new_target@entry=..., fun_data=..., receiver=...,
receiver@entry=..., args=...)
at /usr/include/x86_64-linux-gnu/bits/string_fortified.h:34
#19 0x0000555556808c7b in v8::internal::Builtin_Impl_HandleApiCall (args=...,
isolate=isolate@entry=0x55555b6160b0) at ../deps/v8/src/handles/handles.h:132
#20 0x0000555556809aec in v8::internal::Builtin_HandleApiCall (args_length=6,
args_object=0x7fffffffd868, isolate=0x55555b6160b0)
at ../deps/v8/src/builtins/builtins-api.cc:131
#21 0x000055555787c100 in Builtins_CEntry_Return1_DontSaveFPRegs_ArgvOnStack_BuiltinExit () at ../../deps/v8/src/builtins/torque-internal.tq:84
#22 0x00005555575f7a8b in Builtins_InterpreterEntryTrampoline ()
at ../../deps/v8/src/objects/string.tq:183
#23 0x0000309dc1801599 in ?? ()
#24 0x000012075fbf7759 in ?? ()
#25 0x0000000600000000 in ?? ()
#26 0x0000309dc1801669 in ?? ()
#27 0x000012075fbf7449 in ?? ()
#28 0x000010ca215e4191 in ?? ()
#29 0x0000309dc1801599 in ?? ()
#30 0x0000309dc1801599 in ?? ()
#31 0x0000309dc1801599 in ?? ()
#32 0x0000309dc1801599 in ?? ()
The reservation is made here: node/deps/v8/src/objects/backing-store.cc, line 355 in e83c8ae, and here: node/deps/v8/src/objects/backing-store.cc, line 378 in e83c8ae.
I am not familiar with that code.
The problem here is production. If you have a process triggering gobs of virtual memory (in my case, 42-45 GB for a process with a 1.2 GB max old space size, which is beyond the pale), then you're likely to get alerts about system resources. I may be looking at two issues here, but I found this after getting an alert on low swap space. Other than virtual memory, we're at 50% memory usage on this box, so I'm not sure where this would be coming from. What's the workaround here? I don't see one.
My suspicion is that I'm running into this problem due to require->import loops. There's a common way to construct dual mjs/cjs packages in which an .mjs wrapper file imports src/index.js; this shows up because an ESM module I use relies on a library that ships both.
Disabling wasm (--noexpose_wasm) fixes this problem for me; see #29103.
Weird. Any perf changes you've noticed with that, @frelars?
Not really, but it fixed the red color in htop (11 GB virt), which was driving me a bit crazy, to be honest :)
This does appear to work (Node 18). FWIW, I think I noticed this problem shortly after adding opentelemetryjs to my project. If anyone is looking for a repro case, that may be a useful place to start.
Interesting, I also installed OpenTelemetry recently, only the core library, not the auto-instrument package for Node. I was thinking it was strange that I did not see the same virt-mem behavior in a very similar system (same Node version, mostly the same packages, but no OTL).
Nope, just removed OTL from the current project, and virt-mem is still at 11 GB without the no-wasm switch.
The wasm workaround seems to cause intermittent issues with llhttp and something to do with JWT. Had to turn it off today to allow a release to go out. I think some Google toolkit overrides or monkey patching may also be involved, but something is trying to look up WebAssembly and failing. Intermittently, which is pretty weird.
@frelars I bumped into code in the otel stack that should definitely trigger the problem described here, but it's far from the only code that could have that pattern. Sounds like you may have other copies of the repro case in your code. I am also not using autoinstrument. And at the moment I'm only using telemetry, not tracing.
Still curious: does this cause any observable issues, except for the number displayed in htop? Virtual memory isn't a resource on 64-bit machines and isn't a meaningful measure of anything. For example, it's typical for Chromium to allocate tens of terabytes of virtual memory on a normal Linux laptop. FWIW, it should just be removed from the default htop config.
What steps will reproduce the bug?
The file testvm.mjs contains:
Running this file via node testvm.mjs causes the node process to allocate 11 gigabytes of virtual memory.
This file, testvm.js, does not cause the problem:
How often does it reproduce? Is there a required condition?
Happens every time. Node >= 14.14.0 on Scientific Linux 7.4, including v15.8.0. File package.json contains:
What is the expected behavior?
Process does not allocate huge amounts of VM
What do you see instead?
Process allocates huge amounts of VM.
Additional information
It doesn't matter which module is imported (I tried "ms" also), and you don't have to execute anything from the imported file. When running under node inspect, the VM allocation happens before the initial breakpoint. Physical memory allocation is not affected. It also happens with v16.0.0-nightly20210210857fbdb13f.