Node v12.16.2 memory issues after upgrade from v8 (<--- Last few GCs --->) #33266

Closed
RobertDittmann opened this issue May 6, 2020 · 38 comments
Labels: confirmed-bug (Issues with confirmed bugs.), memory (Issues and PRs related to the memory management or memory footprint.), v8 engine (Issues and PRs related to the V8 dependency.)

Comments

@RobertDittmann commented May 6, 2020

This issue is a continuation of #32737.

Version:
v12.16.2 (but v12.13 on production)

Platform:
Darwin Kernel Version 19.4.0: Wed Mar 4 22:28:40 PST 2020; root:xnu-6153.101.6~15/RELEASE_X86_64 x86_64
(but docker with node:12.13.0-alpine on production)

Subsystem:
? runtime, heap, garbage collection

Description:
As in the previous ticket: "We recently upgraded our production servers from docker containers with node v8 to docker containers with node v12.10 (node:12.13.0-alpine). At first all seemed fine, but then we started noticing pod restarts by Kubernetes being OOM killed. Since the upgrade, memory usage seems to increase over time, sometimes in steep inclines, until reaching ~500MB, at which point they are killed by Kubernetes."

With the same code base and dependencies, switching between three versions of node (8.17.0, 10.20.1, 12.16.2) shows different memory usage. With version 12.16.2 the node service crashes with these logs:

TESTED_SERVICE.GetData took 1691 ms (queue-time = 409 ms, process-time = 1282 ms, processing-count = 100, queue-size = 124)"}
{"@timestamp":"2020-05-06T10:49:42.337Z","level":"debug","message":"GRPC server call TESTED_SERVICE.GetData took 1724 ms (queue-time = 431 ms, process-time = 1293 ms, processing-count = 100, queue-size = 123)"}

<--- Last few GCs --->
cr[35106:0x102aac000] 10407728 ms: Mark-sweep 543.8 (546.1) -> 543.7 (546.1) MB, 158.9 / 0.0 ms  (+ 2.9 ms in 2 steps since start of marking, biggest step 2.9 ms, walltime since start of marking 163 ms) (average mu = 0.102, current mu = 0.010) finalize incr[35106:0x102aac000] 10407914 ms: Mark-sweep 543.8 (546.1) -> 543.7 (546.1) MB, 177.3 / 0.0 ms  (+ 5.1 ms in 2 steps since start of marking, biggest step 5.0 ms, walltime since start of marking 186 ms) (average mu = 0.058, current mu = 0.018) finalize incr

<--- JS stacktrace --->

==== JS stack trace =========================================

    0: ExitFrame [pc: 0x10097d5b9]
    1: StubFrame [pc: 0x1009e8f05]
Security context: 0x1fc1cc0c08d1 <JSObject>
    2: new constructor(aka Op) [0x1fc1b415e939] [/Users/robertdittmann/Documents/Tutorials/node-memory-test/node_modules/protobufjs/src/writer.js:21] [bytecode=0x1fc1cf4764f1 offset=0](this=0x1fc1ca0d2b61 <Op map = 0x1fc11cbd1199>,0x1fc1b415e979 <JSFunction noop (sfi = 0x1fc1712aee81)>,0,0)
    3: ConstructFrame [pc: 0x1008fe7...

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
 1: 0x1010248bd node::Abort() (.cold.1) [/usr/local/bin/node]
 2: 0x100084c4d node::FatalError(char const*, char const*) [/usr/local/bin/node]
 3: 0x100084d8e node::OnFatalError(char const*, char const*) [/usr/local/bin/node]
 4: 0x100186477 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]
 5: 0x100186417 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]
 6: 0x1003141c5 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/usr/local/bin/node]
 7: 0x100315a3a v8::internal::Heap::RecomputeLimits(v8::internal::GarbageCollector) [/usr/local/bin/node]
 8: 0x10031246c v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/usr/local/bin/node]
 9: 0x10031026e v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/usr/local/bin/node]
10: 0x10030f2b1 v8::internal::Heap::HandleGCRequest() [/usr/local/bin/node]
11: 0x1002d4551 v8::internal::StackGuard::HandleInterrupts() [/usr/local/bin/node]
12: 0x10063e79c v8::internal::Runtime_StackGuard(int, unsigned long*, v8::internal::Isolate*) [/usr/local/bin/node]
13: 0x10097d5b9 Builtins_CEntry_Return1_DontSaveFPRegs_ArgvOnStack_NoBuiltinExit [/usr/local/bin/node]
14: 0x1009e8f05 Builtins_StackCheckHandler [/usr/local/bin/node]
[1]    35106 abort      node --max-old-space-size=384 app.js

What steps will reproduce the bug?

  1. Download the prepared sample "slim version" of the service code (without other parts like Redis, DynamoDB, Prometheus, Zipkin, routes, etc.): node-sample
  2. Download the prepared sample client: java-sample
  3. Change the node version to 12.16.2
  4. For the node service (rebuild and run):
rm -rf node_modules
npm install
node --max-old-space-size=384 app.js
  5. For the Java service (rebuild and run):
mvn clean install
mvn spring-boot:run
  6. After around 3-4 hours the node service should throw the above exception. The node service writes a CSV file called memory_usage.csv to its directory (process memory in MB, sampled once per minute); a minimal sketch of such a logger is shown after this list.
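For reference, the repro's memory_usage.csv writer lives in the sample repo and is not shown here; a minimal sketch of what such a per-minute logger might look like (file name and column layout are assumptions taken from the description above):

// Hypothetical per-minute memory logger, approximating the one described above.
// The file name and columns are assumptions based on the issue text, not the repo's code.
const fs = require('fs');

const MB = 1024 * 1024;
const LOG_FILE = 'memory_usage.csv';

fs.writeFileSync(LOG_FILE, 'minute,rss,heapTotal,heapUsed\n');

let minute = 0;
setInterval(() => {
    const { rss, heapTotal, heapUsed } = process.memoryUsage();
    minute += 1;
    fs.appendFileSync(LOG_FILE,
        `${minute},${Math.round(rss / MB)},${Math.round(heapTotal / MB)},${Math.round(heapUsed / MB)}\n`);
}, 60 * 1000);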

The same situation appears on the production environment, but there it takes a few days to happen.

Below is a comparison of node versions:

  • node v12.16.2 started with command: node --max-old-space-size=384 app.js (crashed - results as above logs)
    image

  • node v12.16.2 started with command: node app.js
    image

  • node v10.20.1 started with command: node app.js (it also shows memory after the load stopped)
    image

  • node v8.17.0 started with command: node app.js
    image

How often does it reproduce? Is there a required condition?
Always.

What is the expected behavior?
A stable heapUsed like in v8.17, with no spikes in memory usage causing OOM kills or GC issues.

What do you see instead?
Memory increases and GC issues.

Additional information
I am looking for solutions. It seems that services we have been running for the last few years cannot be used with LTS v12 in production.

Please let me know how I can help further,

Kind regards,
Robert

@RobertDittmann (Author)

I also did a comparison of v10.20.1 with --max-old-space-size set and without. Heap usage is the same in both cases; no impact.

  • node v10.20.1 started with command: node app.js

Screenshot 2020-05-07 at 10 35 41

  • node v10.20.1 started with command: node --max-old-space-size=384 app.js

Screenshot 2020-05-07 at 10 36 04

@MylesBorins (Member)

/cc @mmarchini who I think was chasing down some other similar issues

@mmarchini (Contributor)

Thank you @RobertDittmann for the comprehensive report. I don't have bandwidth to investigate this right now, but hopefully someone else can take a look. One thing that might be helpful is providing a Node.js client instead of a Java client, since many of us might not have Java installed.

You're seeing a similar issue on v12.13, is that correct? If that's the case, I wonder if we have an actual leak in core instead of a GC bug on V8.

One thing that might be helpful (for anyone looking into this issue) is taking a heap profile of the application on different Node.js versions, to confirm whether the allocation patterns are the same or whether extra allocations are actually happening. A heap snapshot might also be useful to show which objects are leaking (if that's the case).
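For anyone picking this up on Node 12: a heap snapshot can be captured with the built-in v8 module, no extra dependency needed. A minimal sketch; the SIGUSR2 trigger is an arbitrary choice and not part of the sample repo:

// Sketch: write a heap snapshot on demand while the load test is running.
// v8.writeHeapSnapshot() has been available since Node.js 11.13, so it works on v12.
const v8 = require('v8');

process.on('SIGUSR2', () => {
    const file = v8.writeHeapSnapshot();   // writes a .heapsnapshot file and returns its name
    console.log(`Heap snapshot written to ${file}`);
});

Sending kill -USR2 <pid> at a few points during the run and diffing the snapshots in Chrome DevTools (Memory tab) should show which object types keep growing.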

@RobertDittmann (Author)

You're seeing a similar issue on v12.13, is that correct?

Correct. v12.13 was deployed on production with the same issue. See the 1st graph in #32737.

One thing that might be helpful is providing a Node.js client instead of a Java client, since many of us might not have Java installed.

If I have time to prepare a Node.js client next week, I will provide it as well.

@mmarchini (Contributor)

I was able to run it locally, by the way; the heap profiler seems like a good start (the results on 10 and 12 are very different).

@mmarchini (Contributor) commented May 8, 2020

It seems like the leak is coming from getObjectFromJson. If you comment out the body of that function, memory doesn't seem to grow uncontrollably. I don't know why it's happening, though; running just that function in a loop doesn't leak memory.

(edit: nevermind, I was running on 10 by mistake 🤦)
(Edit 2: turns out the assumption above was right)

@RobertDittmann (Author) commented May 9, 2020

It seems like the leak is coming from getObjectFromJson. If you comment out the body on that function memory doesn't seem to grow uncontrollably.

@mmarchini very good point! Indeed, I ran tests with the mentioned method body commented out, and the results show a stable heap:
Screenshot 2020-05-09 at 20 48 49

But I still do not know why, for the same source code, memory usage differs between v8, v10, and v12.

The getObjectFromJson method converts stringified JSON to an object when possible, or returns the object if it is already one. This method is executed twice in the given example. That is why I ran another test with a modified gRPC call:

async function getData(call, callback) {
    const data = await testedService.getData(call.request);
    // const grpcData = grpcMapper.convertToGrpcData(data);
    try {
        // Note: JSON.parse receives a non-string here; it is coerced to
        // "[object Object]" and always throws a SyntaxError, which is swallowed below.
        const parsedObject = JSON.parse({test: 12345});
    } catch (err) {
        // ignore
    }
    try {
        const parsedObject = JSON.parse({test: 12345});
    } catch (err) {
        // ignore
    }
    callback(null, {
        state: grpcResponseStateEnum.OK,
        data: null,
    });
}

Now heap is increasing:
Screenshot 2020-05-09 at 20 50 26

It seems that there could be a bug in:

  • JSON.parse for cases when an error occurs
    or
  • the try/catch block?

We have several dozen Node.js services; for some of them the impact is bigger, for others it may not be visible yet. We experienced it after changing the Node.js version, as mentioned before.

@SkeLLLa commented May 9, 2020

I've got something similar to this one, but mostly on node 14 (14.0.0 and 14.2.0).
In my case only RSS memory grows. For example, a simple node app could use 512MB of RSS while heapUsed and heapTotal are around 10-20MB.
I was able to notice slow RSS growth even on a bare HTTP server require('http').createServer(() => {}).listen(3000), but it's very slow. The more code you add and the more load you apply, the more frequently GC runs and the faster RSS grows.

Usually RSS grows up to 512MB and after that remains at the same level for a long time. I also tried "--optimize-for-size" and other GC-related flags; they don't help.

However, after downgrading node to 12.14.0, this RSS growth is smaller, and for the same app under the same load RSS stabilizes at 200MB.

I created a simple test here (with node 12.12 and node 14):
https://github.com/SkeLLLa/node-14-12-memory

On Linux I have the following results:
node 14: 10000 requests, 450MB of RSS after the test, and it does not decrease.
node 12: 10000 requests, 120MB of RSS after the test, and within a minute it decreased to 90MB.

@RobertDittmann (Author)

It looks like the memory leak occurs in cases similar to:

try {
    const parsedObject = JSON.parse({test: 12345});
} catch (err) {
    // ignore
}

For:

try {
    const parsedObject = JSON.parse('{"test": "12345"}');
} catch (err) {
    // ignore
}

and

try {
    throw new Error();
} catch (err) {
    // ignore
}

everything seems to be fine.
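A standalone loop along these lines (written for this thread, not taken from the sample repos) should make the difference visible: on affected versions heapUsed and RSS keep climbing while JSON.parse is handed a non-string and throws, and stay roughly flat when the parse succeeds:

// Minimal sketch of the suspected leak: JSON.parse receives a non-string,
// coerces it to "[object Object]", and throws a SyntaxError on every iteration.
// Assumption: a tight loop is enough to show the growth that the real workload
// produces over hours. Compare the printed numbers on v10/v14 vs. v12.
for (let i = 1; i <= 1000000; i++) {
    try {
        JSON.parse({test: 12345});
    } catch (err) {
        // ignore, as in the original code
    }
    if (i % 100000 === 0) {
        const { rss, heapUsed } = process.memoryUsage();
        console.log(`${i} iterations: rss=${Math.round(rss / 1048576)}MB heapUsed=${Math.round(heapUsed / 1048576)}MB`);
    }
}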

@mmarchini (Contributor)

@SkeLLLa that sounds like a different issue. Do you mind opening a new issue in the repo to track that?

@RobertDittmann the issue is likely some edge case, and not general JSON.parse or try/catch usage. Trying to parse an object might be the edge case. Either way, this does sound like an issue on V8, and we should report it upstream once we get a narrower reproducible.

@RobertDittmann (Author)

I see that other services where the memory issue also exists have no JSON parsing operations, etc., so it seems that there is something wrong with the way V8 works.

@MikevPeeren commented May 18, 2020

We are also having issues with memory after upgrading from 8 to 12.

This is our memory usage after the upgrade; it slowly creeps up and up.
Schermafbeelding 2020-05-18 om 05 27 20

Any advice on what to do?

@wdittmer-mp

We downgraded to Node v10.20.1 for now.

@MikevPeeren

We downgraded to Node v10.20.1 for now.

@wdittmer-mp because it is not fixable at this time? Is this just Node 12 related?

@wdittmer-mp

Starting from v12 (I think we also saw it with v13) we see this issue. As a temporary workaround we found that v10.20.1 has no issues and downgraded to that, so that at least we have an LTS version and production is stable.

@RobertDittmann (Author)

@MikevPeeren it is Node 12 related (we did not test other versions because we use LTS). Services are affected by poor memory handling, and GC crashes them. Just for comparison, we ran load tests on the same service with node 10 and 12; the results are below.

V12, 2 days running (the services restarted, then memory dropped, and then the same issue again):
PastedGraphic-3

V10 after more than 4 days:
PastedGraphic-2

@MikevPeeren

@RobertDittmann Thanks for the great comparison. Is there any news about this from Node itself, or even an acknowledgement of it? I assume they want to fix this.

@mmarchini (Contributor)

@MikevPeeren @wdittmer-mp it would be good to have reproducibles for your issues as well, to make sure they are the same as this one (and not a different memory issue).

I see that other services where the memory issue also exists have no JSON parsing operations, etc., so it seems that there is something wrong with the way V8 works.

Interesting, it might be something more subtle, then (like having a builtin call inside try/catch in some very specific situations).

is there any news about this from Node itself, or even an acknowledgement of it

Best we can tell so far this seems like an issue on V8, not Node.js. The reproducible shared by @RobertDittmann is great for us to analyse, but it's too big to share with the V8 team, so we need to find a smaller one.

@RobertDittmann (Author)

@mmarchini I created a ticket for V8 and I am waiting for their feedback. https://bugs.chromium.org/p/v8/issues/detail?id=10538

@mmarchini (Contributor)

Great! It might be worth sharing a bit more context there. A heap profile and a heap snapshot, for example. Maybe also the output from --trace-gc. I'll try to do so when I have some extra time, but if you want to do it first, go for it :)
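For reference, a run that collects both could look roughly like this; --heap-prof needs Node 12.4 or newer and only writes its .heapprofile file on a clean exit, so the process would have to be stopped before it aborts:

node --trace-gc --heap-prof --max-old-space-size=384 app.js > gc-trace.log 2>&1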

@wdittmer-mp

@MikevPeeren @wdittmer-mp would be good to have reproducibles for your issues as well, to make sure they are the same as this one (and not a different memory issue).

@RobertDittmann is my colleague working on the same project 😅, hence no other repro.
We will try to narrow it down further, but need to manage time a bit.

@RobertDittmann (Author)

@mmarchini I updated both repositories; you can pull the changes. Now it is possible to run both versions: the previous one with gRPC, queue, and decorators, and a new test with a simple REST call. Steps to reproduce the issue with the REST test:

  1. Download the prepared sample: node-sample
  2. Download the prepared sample client: java-sample
  3. Change the node version to 12.16.2
  4. For the node service (rebuild and run):
rm -rf node_modules
npm install
node --max-old-space-size=384 app.js
  5. For the Java service (rebuild and run):
mvn clean install
mvn spring-boot:run -Drun.arguments="rest"
  6. After around 36 minutes the node service should crash with the exception below. The node service writes a CSV file called memory_usage.csv to its directory (process memory in MB, sampled once per minute).
{"@timestamp":"2020-05-23T19:30:59.186Z","level":"warn","message":"Event loop lag detected, latency=251.34392385473603 ms."}

<--- Last few GCs --->

[65769:0x102a6b000]  2173342 ms: Scavenge 375.0 (378.9) -> 374.0 (378.9) MB, 0.9 / 0.1 ms  (average mu = 0.392, current mu = 0.386) allocation failure 
[65769:0x102a6b000]  2173348 ms: Scavenge 375.3 (378.9) -> 374.8 (379.6) MB, 1.1 / 0.0 ms  (average mu = 0.392, current mu = 0.386) allocation failure 
[65769:0x102a6b000]  2173355 ms: Scavenge 375.9 (379.6) -> 375.5 (384.6) MB, 1.1 / 0.0 ms  (average mu = 0.392, current mu = 0.386) allocation failure 


<--- JS stacktrace --->

==== JS stack trace =========================================

    0: ExitFrame [pc: 0x10097d5b9]
Security context: 0x34f8ab3408d1 <JSObject>
    1: bind [0x34f8ab340e09](this=0x34f81f109d21 <JSFunction updateOutgoingData (sfi = 0x34f8995689d9)>,0x34f885f804b1 <undefined>,0x34f826467461 <Socket map = 0x34f82fd736a9>,0x34f826469549 <Object map = 0x34f82fd76499>)
    2: parserOnIncoming(aka parserOnIncoming) [0x34f81f10a149] [_http_server.js:749] [bytecode=0x34f8e3627961 offset=220](this=0x34f885f804b1...

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
 1: 0x1010248bd node::Abort() (.cold.1) [/usr/local/bin/node]
 2: 0x100084c4d node::FatalError(char const*, char const*) [/usr/local/bin/node]
 3: 0x100084d8e node::OnFatalError(char const*, char const*) [/usr/local/bin/node]
 4: 0x100186477 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]
 5: 0x100186417 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]
 6: 0x1003141c5 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/usr/local/bin/node]
 7: 0x100315a3a v8::internal::Heap::RecomputeLimits(v8::internal::GarbageCollector) [/usr/local/bin/node]
 8: 0x10031246c v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/usr/local/bin/node]
 9: 0x10031026e v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/usr/local/bin/node]
10: 0x10030f2b1 v8::internal::Heap::HandleGCRequest() [/usr/local/bin/node]
11: 0x1002d4551 v8::internal::StackGuard::HandleInterrupts() [/usr/local/bin/node]
12: 0x10063e79c v8::internal::Runtime_StackGuard(int, unsigned long*, v8::internal::Isolate*) [/usr/local/bin/node]
13: 0x10097d5b9 Builtins_CEntry_Return1_DontSaveFPRegs_ArgvOnStack_NoBuiltinExit [/usr/local/bin/node]
[1]    65769 abort      node --max-old-space-size=384 app.js

Results for node 12.16.2
Screenshot 2020-05-23 at 23 44 45

Results for node 10.20.1
Screenshot 2020-05-23 at 23 48 04

I hope it is "simple" enough now :)

@RobertDittmann (Author)

This is another test, using only REST calls, which shows how memory behaves when the load stops. In v10 it is always reduced to 20MB; with v12 it is very different.

v12.16.2
Screenshot 2020-05-24 at 11 38 58

v10.20.1
Screenshot 2020-05-24 at 11 38 50

@mmarchini (Contributor)

@RobertDittmann great! Using a simpler stack is good. Any chance you can replace the Java service with an HTTP load tester like autocannon? I'm also wondering if we can speed up this test a bit (maybe increase the RPS on the client, if the server isn't saturated); 36 minutes is a long time for a test, especially when comparing different Node.js versions.
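For reference, an autocannon run against the REST endpoint could be as short as the line below; the port and path are placeholders for whatever the sample service actually listens on:

npx autocannon -c 100 -d 600 http://localhost:3000/

That is 100 concurrent connections for 600 seconds; the duration can be shortened once the leak shows up earlier.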

@RobertDittmann (Author) commented May 26, 2020

I made small changes in the Java and node services (pull required). You can run node with the command:

node --max-old-space-size=128 app.js

After 3 minutes it will reach 128 MB, then it takes 6-7 minutes to failure (5-6 times faster), so there is no need to set memory to 384. It is handling around 5k requests per second with 200 status. I will check autocannon tomorrow.

Some graphs from today:

v12.16.2 and crashed
Screenshot 2020-05-26 at 17 10 49

v10.20.1
Screenshot 2020-05-26 at 17 11 32

Now for memory clearing, both cases are similar. Around the 2nd minute I stopped the Java service, started it again around the 4th minute, and then stopped it again at the 5th.

v12.16.2
Screenshot 2020-05-26 at 17 12 58

time (min) | rss (MB) | heapTotal (MB) | heapUsed (MB)
-- | -- | -- | --
1 | 117 | 82 | 53
2 | 153 | 116 | 88
3 | 121 | 85 | 82
4 | 121 | 85 | 82
5 | 163 | 125 | 104
6 | 157 | 119 | 117
7 | 157 | 119 | 117
8 | 157 | 119 | 117
9 | 157 | 119 | 117

v10.20.1
Screenshot 2020-05-26 at 17 14 02

time (min) | rss (MB) | heapTotal (MB) | heapUsed (MB)
-- | -- | -- | --
1 | 103 | 71 | 44
2 | 110 | 76 | 40
3 | 52 | 21 | 16
4 | 52 | 21 | 16
5 | 117 | 84 | 47
6 | 110 | 78 | 46
7 | 53 | 22 | 16
8 | 53 | 22 | 16
9 | 53 | 22 | 16
10 | 53 | 22 | 16
11 | 53 | 22 | 16

@mmarchini (Contributor)

Thanks, I'll try it today. I was hoping we could make the test grow memory faster instead of crashing faster by reducing the heap size. Growing memory faster gives us relevant information we can share with V8 faster, whereas crashing doesn't give much information we can use.

I'll try it today with the most recent changes and then share the results with V8.

@RobertDittmann (Author) commented May 26, 2020

It grows faster: you reach 128MB within 3 minutes instead of 14.

@RobertDittmann (Author)

According to the comment at https://bugs.chromium.org/p/v8/issues/detail?id=10538#c7:

Alright, it's a leak fixed by the GC team :)

https://chromium-review.googlesource.com/c/v8/v8/+/1967385 is the fix.

When we throw a JSON parse error, we create a fake "script" on the fly. Apparently we allocate a slot for it that we'd never get rid of. Over time this gets pretty costly...

Any info on whether this is or can be part of an LTS version, and which one?

@mmarchini (Contributor)

Any info on whether this is or can be part of an LTS version, and which one?

We will need to evaluate whether the changes are backportable, but I think it's worth trying, yes. Good thing they found it! I had just come up with an isolated test case and was about to share it there, but apparently they beat me to it :D

@mmarchini (Contributor)

Ok, seems to be fixed on v14, so we only need to backport to v12.

mmarchini added the v12.x, v8 engine, memory, and confirmed-bug labels on May 26, 2020
mmarchini linked a pull request on May 26, 2020 that will close this issue
@Golorio commented Jun 3, 2020

Any news on the backporting? I'd love to see the RSS drop in my node process.
Really good job finding this issue.
Really good job finding this issue.

@lundibundi (Member)

@Golorio the backport for v12 is ready (#33573); it'll ship in the next Node.js 12 release.

@Golorio commented Jun 3, 2020

@lundibundi Alright, I will wait for the v12 release to update my node version. There isn't any other quick way in the meantime, is there?

@mmarchini (Contributor)

@Golorio this is a very specific leak, so it would be good to confirm if you're experiencing it or something else. The cause is described here:

When we throw a JSON parse error, we create a fake "script" on the fly. Apparently we allocate a slot for it that we'd never get rid of. Over time this gets pretty costly...

In other words, every time JSON.parse throws it will allocate a small amount of memory which will not be released (thus the leak). On applications where JSON.parse is not used (directly or by a dependency), or JSON.parse is guaranteed to never fail, or where the application exits when it fails, this leak won't happen. It will also only happen on v11, v12, and v13 (v11 and v13 are not supported anymore though). Unfortunately, I don't think there's any workaround for it, since it is a bug very deep on V8. The closest to an immediate workaround would be downgrading to v10 or upgrading to v14 (but note that v14 is not LTS yet, so it's not advised for most users).
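One rough way to check whether an application even hits this code path (a diagnostic sketch written for this thread, not an official API) is to temporarily wrap JSON.parse and count how often it throws:

// Diagnostic sketch: count failing JSON.parse calls. A counter that keeps growing
// under load means the application exercises the leaking path on v11-v13.
const originalParse = JSON.parse;
let failedParses = 0;

JSON.parse = function (...args) {
    try {
        return originalParse.apply(this, args);
    } catch (err) {
        failedParses += 1;   // each throw leaks one "script" slot on affected versions
        throw err;           // preserve the original behaviour
    }
};

setInterval(() => console.log(`JSON.parse failures so far: ${failedParses}`), 60000).unref();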

If you're not sure whether it's the same leak, I suggest opening a new issue and filling in the template, so folks can help investigate. Or, if you want to wait for the next release to try it, the ETA is a release candidate next week and the actual release two weeks from now.

@Golorio commented Jun 4, 2020

@mmarchini I see, thank you very much. I do believe my issue is connected to this one, since I use JSON.parse extensively in my code, have an RSS leak, am on node v12, and, if I remember correctly, the issue started after upgrading from v10, although now I can't downgrade.

@Golorio commented Jun 17, 2020

Unfortunately, it seems this was not included in v12.18.1. I'll wait for v12.18.2. Regardless, good work on the updates.

@mmarchini (Contributor)

The fix is included in v12.18.2. @RobertDittmann let us know if this release fixes your memory leak.

I'll close the issue now, if anyone else is still experiencing memory leaks on v12.18.2, I'd recommend opening another issue since it's probably a different memory leak (feel free to ping me in that case).

@RobertDittmann (Author)

The fix is included in v12.18.2. @RobertDittmann let us know if this release fixes your memory leak.

I'll close the issue now, if anyone else is still experiencing memory leaks on v12.18.2, I'd recommend opening another issue since it's probably a different memory leak (feel free to ping me in that case).

Hi, it seems that it works fine after the fix. Thanks!

Screenshot 2020-07-08 at 16 09 35

Screenshot 2020-07-08 at 16 09 44
