Buffer and ArrayBuffer-based objects trigger mark-sweeps instead of scavenges #1671

Closed
ChALkeR opened this issue May 11, 2015 · 78 comments
Labels: buffer, memory, performance, v8 engine

Comments

@ChALkeR
Member

ChALkeR commented May 11, 2015

Testcase:

'use strict';

var count = 0, limit = 800000;

function call() {
    var _buffer = new Buffer(16 * 1024);
    count++;
    if (count > limit) {
        process.exit(0);
    }
    setImmediate(call);
}

for (var i = 0; i < 20; i++) {
    call();
}

var gcs = new (require('gc-stats'))();
gcs.on('stats', function(stats) {
    console.log(JSON.stringify(stats));
});

You can see that this causes a lot of gctype: 2 (mark-sweep) GC events.
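
(For reference, a small sketch that tallies the GC pauses by type from the same gc-stats events — in this thread, gctype 1 is a scavenge and gctype 2 is a mark-sweep/mark-compact:)

'use strict';

// Count GC pauses per gctype as reported by the gc-stats module.
var gcsCounter = new (require('gc-stats'))();
var gcCounts = {};
gcsCounter.on('stats', function(stats) {
    gcCounts[stats.gctype] = (gcCounts[stats.gctype] || 0) + 1;
});
process.on('exit', function() {
    console.log('GC pauses by gctype:', JSON.stringify(gcCounts));
});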

@benjamingr
Member

I really appreciate your findings lately, but I'm not sure what you expect io.js to do about this.

io.js has used native V8 typed arrays since this commit by @bnoordhuis in 2013.

@ChALkeR
Member Author

ChALkeR commented May 11, 2015

It depends on the size of the typed array or Buffer.

  • new Uint8Array(16 * 1024) — gctype = 2
  • new Uint8Array(1024) — gctype = 1
  • new Buffer(16 * 1024) — gctype = 2
  • new Buffer(1024) — gctype = 2
  • new Buffer(100) — gctype = 1

@ChALkeR
Member Author

ChALkeR commented May 11, 2015

@benjamingr If this is confirmed as an upstream (V8) bug, it should be reported upstream.
I am not yet sure about the exact cause; my first impression might be wrong, and I do not have a testcase against pure V8. Even if the problem is upstream, this issue could be kept here for reference (in an open or a closed state).

@benjamingr
Member

To be clear, I wasn't criticizing you for opening an issue - I was asking what you think or expect io.js to be able to do about this.

This doesn't reproduce on "pure" d8?

@ChALkeR
Member Author

ChALkeR commented May 11, 2015

I don't have a recent d8 version at hand (the one I have installed is 3.30.something). I will check a bit later, hopefully today.

Most probably, I expect this to be reported upstream.

I was not sure that the reason Buffer objects are not quickly collected is the same as the reason typed arrays are not, but seeing that they both behave the same at smaller sizes, I now guess it's probably the same reason.

@ChALkeR
Member Author

ChALkeR commented May 11, 2015

With new Buffer(16 * 1024) objects, roughly half of the total time in the above testcase is spent on GC: 1321 ms of GC (1310 ms of which is gctype = 2) out of 2697 ms total.
With new Buffer(128 * 1024) objects, about 73% of the total time is spent on (full) GC: 4957 ms out of 6782 ms.
With new Buffer(1024 * 1024) objects, about 84% of the total time is spent on (full) GC: 34656 ms out of 41273 ms.

With an old v8 version (3.30.33.16), d8 --trace-gc -e 'var x; for (var i = 0; i < 100000; i++) { x = JSON.parse(JSON.stringify({foo: false})); }' and d8 --trace-gc -e 'var x; for (var i = 0; i < 100000; i++) { x = new Uint8Array(1 * 1024); }' result in scavenges (gctype=1), while d8 --trace-gc -e 'var x; for (var i = 0; i < 100000; i++) { x = new Uint8Array(16 * 1024); }' results in mark-sweeps (gctype=2).
I have to check a recent version, though.

Both large and small Buffer and Uint8Array objects are allocated in the external memory space (checked in io.js + V8 4.3).

@mscdex mscdex added the buffer Issues and PRs related to the buffer subsystem. label May 11, 2015
@trevnorris
Contributor

You're forgetting that new Buffer(n) where n < Buffer.poolSize / 2 is going to be a slice. You'll probably want to use SlowBuffer for more precise testing.
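
(For context — a rough sketch of the difference described here, assuming the default Buffer.poolSize of 8 KiB in that era's io.js: allocations below poolSize / 2 are slices of a shared pre-allocated slab, while SlowBuffer always gets its own external backing store.)

'use strict';

var SlowBuffer = require('buffer').SlowBuffer;

// Pooled: 1 KiB < Buffer.poolSize / 2, so this is a slice of a shared slab.
var pooled = new Buffer(1024);

// Unpooled: a SlowBuffer of the same size gets its own external allocation.
var unpooled = new SlowBuffer(1024);

console.log(Buffer.poolSize, pooled.length, unpooled.length);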

@ChALkeR ChALkeR changed the title Buffer and Uint8Array objects are not collectable by the incremental GC Buffer and Uint8Array objects trigger full GC May 12, 2015
@ChALkeR
Member Author

ChALkeR commented May 12, 2015

Well, technically those objects are collected with scavenges, but the problem is that they do not trigger scavenges.

A testcase for that:

'use strict';

var count = 0, limit = 1000000;
var start = process.hrtime();

var something = [];
for (var j = 0; j < 500; j++) something.push(Math.random());

var slices = 0;

function call() {
    var buffer = new Buffer(16 * 1024);
    var bufferx;
    for (var j = 0; j < slices; j++) bufferx = something.slice();
    count++;
    if (count > limit) {
        console.log(slices, process.hrtime(start));
        slices++;
        if (slices > 5) {
            process.exit(0);
        }
        count = 0;
        gc(); gc(); gc(); gc(); // requires running node with --expose-gc
        start = process.hrtime();
    }
    setImmediate(call);
}

for (var i = 0; i < 20; i++) {
    call();
}

Results:

0 [ 2, 852972979 ]
1 [ 4, 779651385 ]
2 [ 7, 128449867 ]
3 [ 5, 260006336 ]
4 [ 6, 285203623 ]
5 [ 7, 498485074 ]

You can see that while each increment of slices by one introduces a visible slowdown, at slices = 3 the scavenge threshold (pushed by bufferx) overtakes the mark-sweep threshold (pushed by buffer): the GC starts running scavenges instead of mark-sweeps, and those clean up the buffer memory just fine, and do it much faster.

@ChALkeR ChALkeR changed the title Buffer and Uint8Array objects trigger full GC Buffer and Uint8Array objects trigger mark-sweeps instead of scavenges May 12, 2015
@ChALkeR
Member Author

ChALkeR commented May 12, 2015

@trevnorris Thanks. This last test uses 16 KiB buffers (above Buffer.poolSize / 2), so it should be fine now.

@Fishrock123 Fishrock123 added the v8 engine Issues and PRs related to the V8 dependency. label May 13, 2015
@ChALkeR
Member Author

ChALkeR commented Jun 19, 2015

https://groups.google.com/forum/#!topic/v8-users/7bg5ym8t7KU — topic in the v8-users group, with an updated testcase targeting ArrayBuffer.
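
(The exact updated testcase is in the linked thread; a minimal ArrayBuffer-only variant along the same lines would look roughly like this:)

'use strict';

var count = 0, limit = 800000;

function call() {
    // Plain ArrayBuffer, no Buffer involved — the backing store is still external.
    var ab = new ArrayBuffer(16 * 1024);
    count++;
    if (count > limit) {
        process.exit(0);
    }
    setImmediate(call);
}

for (var i = 0; i < 20; i++) {
    call();
}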

@ChALkeR ChALkeR changed the title Buffer and Uint8Array objects trigger mark-sweeps instead of scavenges Buffer and TypedArray-based objects trigger mark-sweeps instead of scavenges Jun 20, 2015
@ChALkeR ChALkeR changed the title Buffer and TypedArray-based objects trigger mark-sweeps instead of scavenges Buffer and ArrayBuffer-based objects trigger mark-sweeps instead of scavenges Jun 20, 2015
@ChALkeR
Member Author

ChALkeR commented Jun 20, 2015

Some IRC logs with @bnoordhuis:

<ChALkeR> On a separate but related matter, do you have any idea why don't ArrayBuffer instances trigger scavenges by themselves?
<ChALkeR> Is that done on purpose?
<bnoordhuis> i think i do. the ArrayBuffer constructor calls v8::Isolate::AdjustAmountOfExternalAllocatedMemory(), which is an input signal to the gc heuristic
<bnoordhuis> in this case, it makes the gc decide to do a mark-sweep instead of just a scavenge because of the large amount of external memory
<ChALkeR> Ah. Thanks. But it looks strange to me that v8 doesn't even try to run scavenges in that testcase.
<ChALkeR> So it looks like AdjustAmountOfExternalAllocatedMemory() pushes only mark-sweeps
<bnoordhuis> it's probably just a bug. mark-sweeps are normally a superset of scavenges, i.e. they do everything scavenges do and more
<bnoordhuis> and yes, AdjustAmountOfExternalAllocatedMemory() always triggers mark-sweeps
<ChALkeR> Is there a similar method to push the scavenges threshold?
<bnoordhuis> not as a method of v8::Isolate, if that is your question
<ChALkeR> Ok, thanks.
<ChALkeR> io.js also uses that for various objects (tls, old Buffer, zlib, extern strings)
<bnoordhuis> yep. and it's a good thing in general
<ChALkeR> Yes, I know. Without it, gc is not aware of externally allocated memory.
<bnoordhuis> yep, exactly. you'd run out of process memory when there is plenty to reclaim
<ChALkeR> And it can think that everything is fine and gc doesn't need to be called, while the actual memory usage goes over a GiB =)
<ChALkeR> I have seen that with tls.
<ChALkeR> So the correct solution would be on the v8 side to try to run scavenges first to free the externally allocated memory.
<ChALkeR> That would speed up my micro-benchmarks several times (at least two) =)
<ChALkeR> As I expect.
<ChALkeR> And would lower the total lag a bit.
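
(The external-memory accounting discussed above can also be observed from JavaScript — a sketch, assuming a Node.js version where process.memoryUsage() reports an external field:)

'use strict';

// ArrayBuffer backing stores live outside the JS heap, so they show up in
// `external` (and RSS) rather than in `heapUsed`.
var before = process.memoryUsage();
var keep = [];
for (var i = 0; i < 100; i++) {
    keep.push(new ArrayBuffer(1024 * 1024));
}
var after = process.memoryUsage();

console.log('heapUsed delta:', after.heapUsed - before.heapUsed);
console.log('external delta:', after.external - before.external);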

@jeisinger
Contributor

would you mind filing a (v8) bug about which collector gets chosen? I think it should be easy to fix

@jeisinger
Contributor

patch here: https://codereview.chromium.org/1201993002 (even though it requests GC in NEW_SPACE, the GC heuristic might do a mark/compact instead, just not always)

@ChALkeR
Member Author

ChALkeR commented Jun 22, 2015

@jeisinger Thanks! Will test in a moment.

@ChALkeR
Member Author

ChALkeR commented Jun 22, 2015

@jeisinger Does not seem to change anything, at least for the testcase at #2022 (comment) and for the testcases supplied above.

@ChALkeR
Member Author

ChALkeR commented Jun 22, 2015

@jeisinger, issue filed: https://code.google.com/p/v8/issues/detail?id=4209

Edit: Ah, here I am, mentioning the wrong person again. Sorry. Fixed =). I should try to copy-paste instead of typing the @ plus the first letter and letting GitHub auto-complete.

@ChALkeR
Member Author

ChALkeR commented Jun 22, 2015

@jeisinger Any reasons why the patch could not work?

@jeisinger
Contributor

The patch kinda does what it's supposed to do... if you run with --trace-gc you'll see that the first GC is now a scavenge.

The benchmark creates external garbage in a busy loop, which is kinda unusual behavior. As soon as you switch to slightly more realistic behavior (you also create JS garbage in the busy loop), the GC heuristic does what it's supposed to do.

@ChALkeR
Member Author

ChALkeR commented Jun 23, 2015

@jeisinger Ah.
Was (with v8 4.2):

[4177]       35 ms: Scavenge 2.0 (41.0) -> 1.7 (41.0) MB, 0.8 ms [allocation failure].
[4177]       36 ms: Scavenge 2.0 (41.0) -> 2.0 (41.0) MB, 0.9 ms [allocation failure].
[4177]       73 ms: Scavenge 3.5 (42.0) -> 3.2 (42.0) MB, 0.8 ms [allocation failure].
[4177]      227 ms: Mark-sweep 5.3 (42.0) -> 3.9 (43.0) MB, 7.3 ms [external memory allocation limit reached.] [GC in old space requested].
[4177]      244 ms: Mark-sweep 3.9 (43.0) -> 3.8 (44.0) MB, 17.6 ms [external memory allocation limit reached.] [GC in old space requested].
[4177]      362 ms: Mark-sweep 4.5 (44.0) -> 3.8 (44.0) MB, 6.0 ms [external memory allocation limit reached.] [GC in old space requested].
[4177]      380 ms: Mark-sweep 3.8 (44.0) -> 3.6 (44.0) MB, 17.5 ms [external memory allocation limit reached.] [GC in old space requested].
[4177]      498 ms: Mark-sweep 4.3 (44.0) -> 3.6 (44.0) MB, 5.6 ms [external memory allocation limit reached.] [GC in old space requested].
[4177]      515 ms: Mark-sweep 3.6 (44.0) -> 3.6 (44.0) MB, 16.9 ms [external memory allocation limit reached.] [GC in old space requested].

With patch:

[4183:0x1331cb0]      108 ms: Scavenge 2.0 (38.0) -> 1.8 (38.0) MB, 1.9 ms [allocation failure].
[4183:0x1331cb0]      121 ms: Scavenge 2.0 (38.0) -> 2.0 (39.0) MB, 2.4 ms [allocation failure].
[4183:0x1331cb0]      173 ms: Scavenge 3.9 (39.0) -> 3.6 (40.0) MB, 0.9 ms [allocation failure].
[4183:0x1331cb0]      285 ms: Scavenge 4.9 (40.0) -> 4.2 (40.0) MB, 3.3 ms [external memory allocation limit reached.].
[4183:0x1331cb0]      285 ms: Scavenge 4.2 (40.0) -> 4.2 (41.0) MB, 0.3 ms [external memory allocation limit reached.].
[4183:0x1331cb0]      285 ms: Scavenge 4.2 (41.0) -> 4.2 (41.0) MB, 0.0 ms [external memory allocation limit reached.].
[4183:0x1331cb0]      285 ms: Scavenge 4.2 (41.0) -> 4.2 (41.0) MB, 0.0 ms [external memory allocation limit reached.].
[4183:0x1331cb0]      290 ms: Mark-sweep 4.2 (41.0) -> 3.7 (41.0) MB, 4.3 ms [external memory allocation limit reached.] [promotion limit reached].
[4183:0x1331cb0]      365 ms: Mark-sweep 4.3 (41.0) -> 3.7 (41.0) MB, 5.8 ms [external memory allocation limit reached.] [promotion limit reached].

Yes, it looks like part of the problem is solved: at least «external memory allocation limit reached» now tries to run a scavenge the first few times. But I cannot observe any real improvement yet.

Though I am not yet sure that this will change things much even for the «more realistic behaviour». It also doesn't look like the tipping point for the size of sliced arrays changed in the testcase at https://groups.google.com/forum/#!topic/v8-users/7bg5ym8t7KU.

Here is a simpler testcase. How should I change it to be «realistic»?

'use strict';

var count = 0;
function call() {
    var buffer = new ArrayBuffer(100 * 1024 * 1024);
    if (count++ < 100) setTimeout(call, 100);
}
call();

you also create js garbage in the busy loop

What do you mean by that?

@jeisinger
Contributor

The main issue here is that the example you gave doesn't do anything. If the optimizer were super smart, it could replace it with an empty program. That makes it difficult to optimize for this test case.
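
(For illustration — a sketch of a variant that also creates ordinary JS garbage and actually uses the buffer, so that the loop is not trivially dead code:)

'use strict';

var count = 0;
function call() {
    var buffer = new ArrayBuffer(100 * 1024 * 1024);
    // Touch the buffer and produce some regular JS garbage alongside
    // the external allocation.
    new Uint8Array(buffer)[0] = count & 0xff;
    var junk = [];
    for (var j = 0; j < 500; j++) junk.push({ n: Math.random() });
    if (count++ < 100) setTimeout(call, 100);
}
call();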

@jeisinger
Contributor

I had to revert the change again as it resulted in too many scavenges without freeing up enough external memory.

If you have a non-trivial benchmark that is negatively affected by this, I'm happy to look into this again.

@jorangreef
Contributor

I am not sure if this helps or is related, but with Node 7 and Node 8 we have been seeing massive jumps in RSS a few days after launching a process.

[chart: RSS over time, showing the jumps described]

For a week or so (it varies), things run smoothly without fragmentation, and then suddenly RSS just keeps climbing until it hits our max old space size limit, which is 32 GB.

This may be anecdotal, but it seems that physically rebooting the Ubuntu machine gets things back to normal again for a few days, whereas restarting just the Node process does not.

We have done no releases that would cause any change, and at the inflection points in the logs I can only see buffers of 30 MB or so being allocated and then released within a few seconds, every few minutes.

The load patterns are the same every day. It's just that at some point the RSS starts climbing without coming down. It's not an increase in load or anything like that.

@trevnorris
Contributor

trevnorris commented Aug 4, 2017

@jorangreef can you provide the v8.getHeapStatistics() output after memory has begun to grow?
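
(For reference, one way to capture such snapshots periodically — a sketch using the public v8 module, assuming a Node.js version that exposes getHeapSpaceStatistics():)

'use strict';

var v8 = require('v8');

// Periodically dump heap statistics so snapshots around an RSS inflection
// point can be compared later.
setInterval(function() {
    console.log(new Date().toISOString());
    console.log('RSS:', process.memoryUsage().rss);
    console.log('V8:', JSON.stringify(v8.getHeapStatistics()));
    v8.getHeapSpaceStatistics().forEach(function(space) {
        console.log('V8_' + space.space_name + ':', JSON.stringify(space));
    });
}, 60 * 1000).unref();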

@jorangreef
Contributor

@trevnorris, this is before and after the first inflection point in the graph above:

2017-07-30T19:20:23.898Z Before:
RSS: 12720783360
HeapUsed: 236940240
HeapTotal: 339771392
V8: {"total_heap_size":339771392,"total_heap_size_executable":7864320,"total_physical_size":339220584,"total_available_size":34129662640,"used_heap_size":236945072,"heap_size_limit":34393292800,"malloced_memory":8192,"peak_malloced_memory":10013504,"does_zap_garbage":0}
V8_new_space: {"space_name":"new_space","space_size":33554432,"space_used_size":4248544,"space_available_size":12250144,"physical_space_size":33554376}
V8_old_space: {"space_name":"old_space","space_size":248512512,"space_used_size":182421152,"space_available_size":61745480,"physical_space_size":248208496}
V8_code_space: {"space_name":"code_space","space_size":6291456,"space_used_size":4313728,"space_available_size":1731680,"physical_space_size":6045600}
V8_map_space: {"space_name":"map_space","space_size":5767168,"space_used_size":735504,"space_available_size":2517640,"physical_space_size":5766288}
V8_large_object_space: {"space_name":"large_object_space","space_size":45645824,"space_used_size":45230664,"space_available_size":34051415552,"physical_space_size":45645824}

2017-07-30T19:21:23.944Z After:
RSS: 13598502912
HeapUsed: 1160082232
HeapTotal: 1227390976
V8: {"total_heap_size":1227390976,"total_heap_size_executable":7864320,"total_physical_size":1226775280,"total_available_size":33193884496,"used_heap_size":1160086688,"heap_size_limit":34393292800,"malloced_memory":8192,"peak_malloced_memory":10013504,"does_zap_garbage":0}
V8_new_space: {"space_name":"new_space","space_size":33554432,"space_used_size":11135544,"space_available_size":5363144,"physical_space_size":33554432}
V8_old_space: {"space_name":"old_space","space_size":1136132096,"space_used_size":1098486048,"space_available_size":18245616,"physical_space_size":1135763784}
V8_code_space: {"space_name":"code_space","space_size":6291456,"space_used_size":4496672,"space_available_size":1548480,"physical_space_size":6045600}
V8_map_space: {"space_name":"map_space","space_size":5767168,"space_used_size":742280,"space_available_size":4929144,"physical_space_size":5766288}
V8_large_object_space: {"space_name":"large_object_space","space_size":45645824,"space_used_size":45230664,"space_available_size":33163795968,"physical_space_size":45645824}

@mlippautz

We landed proper retaining counters for array buffers in https://chromium-review.googlesource.com/c/519386.

What is left is adding a heuristic at some safe point where we can do a Scavenge (e.g. advancing a page). This might be a good project to start contributing to V8, if somebody feels like it. Otherwise, we will land bits and pieces as we have spare cycles.

The memory consumption issue should probably get its own bug, as it is not related to triggering Scavenges instead of Mark-Compact GCs.

@jorangreef
Contributor

@trevnorris I'm sorry, the chart I provided above a few days ago turned out to be a red herring.

I found the hockey stick jumps in RSS were due to hash table resizing in our storage system (the green line below):

[chart: RSS with the storage system's hash table resizing shown as the green line]

@Fishrock123
Member

Did anyone ever follow up on this?

@Trott
Member

Trott commented Dec 24, 2018

@mlippautz @nodejs/v8 @ChALkeR Is this still an issue in Node.js 11? (I imagine so.)

@ChALkeR
Member Author

ChALkeR commented Feb 15, 2019

@Trott Just tested on v11.10: triggering scavenges manually still speeds up my benchmarks.
Update: same on v12.0.0-nightly201902158e68dc53b3.

@BridgeAR
Member

If I am not mistaken, there's a patch upcoming soon that might solve this issue.

The tracking bug is https://bugs.chromium.org/p/v8/issues/detail?id=9701.

@ChALkeR
Member Author

ChALkeR commented Sep 23, 2019

@BridgeAR That's great news, thanks! I can test that patch if it lands cleanly on our v8.

@ChALkeR
Member Author

ChALkeR commented Dec 14, 2019

This still affects the latest master.
The patch does not apply cleanly and needs investigation.

@ChALkeR
Member Author

ChALkeR commented Dec 17, 2019

The new patch from https://chromium-review.googlesource.com/c/v8/v8/+/1803614 applied cleanly.

master without patch:

$ ./node.master i1671.js
[ 0, 752038399 ] 132.38671875
[ 0, 775638647 ] 137.05859375
[ 0, 612542735 ] 147.8359375
[ 0, 798670672 ] 143.47265625
[ 0, 686932159 ] 147.91015625
[ 0, 652596915 ] 147.9453125
[ 0, 767230859 ] 142.64453125
[ 0, 704772366 ] 140.1171875
[ 0, 721586181 ] 132.13671875
[ 0, 752640443 ] 140.2265625
[ 0, 804869781 ] 140.30078125
[ 0, 797604137 ] 148.34765625
[ 0, 765060783 ] 148.3359375
^C
$ ./node.master --expose-gc i1671.js
[ 0, 510874711 ] 83.9765625
[ 0, 473433967 ] 84.859375
[ 0, 472625270 ] 85.38671875
[ 0, 469358379 ] 87.3359375
[ 0, 475308306 ] 87.3359375
[ 0, 474982840 ] 91.46484375
[ 0, 473101868 ] 91.46875
[ 0, 475998364 ] 91.46875
[ 0, 479278017 ] 91.46875
[ 0, 472223707 ] 93.08984375
^C

master with patch:

$ ./node.patched i1671.js
[ 0, 689388941 ] 100.10546875
[ 0, 692828663 ] 112.4140625
[ 0, 579924797 ] 106.2890625
[ 0, 590019919 ] 116.75
[ 0, 572502033 ] 111.28515625
[ 0, 617208788 ] 110.2109375
[ 0, 580989751 ] 118.20703125
[ 0, 609618953 ] 112.43359375
[ 0, 603448117 ] 119.36328125
^C
$ ./node.patched --expose-gc i1671.js
[ 0, 544357967 ] 74.984375
[ 0, 492740927 ] 85.03515625
[ 0, 537714774 ] 91.72265625
[ 0, 511871085 ] 91.734375
[ 0, 493428551 ] 94.171875
[ 0, 493925924 ] 94.17578125
[ 0, 491575551 ] 98.0859375
[ 0, 497687444 ] 98.08984375
[ 0, 492114640 ] 98.08984375
[ 0, 484245828 ] 98.09765625
[ 0, 487521134 ] 98.1015625
[ 0, 492115625 ] 98.10546875

There seems to be a significant improvement in the automatic GC version and a slight slowdown in the manual GC version. Manual GC is still faster, but it is hand-tweaked for this specific setup, so it might be hard to match.

Will file a PR against node master to run the perf checks. I expect noticeable improvements.

@ChALkeR
Member Author

ChALkeR commented Dec 17, 2019

With the patch, all unnecessary mark-sweeps have been replaced with scavenges!

@ChALkeR
Member Author

ChALkeR commented Dec 17, 2019

#31007 (or an update to a newer V8 version with the corresponding commit) might actually close this issue.

@ChALkeR ChALkeR removed the help wanted Issues that need assistance from volunteers or PRs that need help to proceed. label Dec 23, 2019
BridgeAR pushed a commit that referenced this issue Dec 25, 2019
Original commit message:

    [heap] Perform GCs on v8::BackingStore allocation

    This adds heuristics to perform young and full GCs on allocation
    of external ArrayBuffer backing stores.

    Young GCs are performed proactively based on the external backing
    store bytes for the young generation. Full GCs are performed only
    if the allocation fails. Subsequent CLs will add heuristics to
    start incremental full GCs based on the external backing store bytes.

    This will allow us to remove AdjustAmountOfExternalMemory for
    ArrayBuffers.

    Bug: v8:9701, chromium:1008938
    Change-Id: I0e8688f582989518926c38260b5cf14e2ca93f84
    Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/1803614
    Commit-Queue: Ulan Degenbaev <ulan@chromium.org>
    Reviewed-by: Dominik Inführ <dinfuehr@chromium.org>
    Reviewed-by: Hannes Payer <hpayer@chromium.org>
    Cr-Commit-Position: refs/heads/master@{#65480}

PR-URL: #31007
Refs: v8/v8@687d865
Refs: #1671
Reviewed-By: Michaël Zasso <targos@protonmail.com>
Reviewed-By: Gus Caplan <me@gus.host>
Reviewed-By: Rich Trott <rtrott@gmail.com>
Reviewed-By: Anna Henningsen <anna@addaleax.net>
ChALkeR added a commit to ChALkeR/io.js that referenced this issue Dec 26, 2019
Refs: v8/v8@687d865
Refs: nodejs#1671
@ChALkeR
Member Author

ChALkeR commented Jan 2, 2020

I believe this is now resolved with #31007. 🎉
Will reopen if I see any major issues with that, but so far I don't see a reason to keep this open.

@ChALkeR ChALkeR closed this as completed Jan 2, 2020
BridgeAR pushed a commit that referenced this issue Jan 3, 2020