
8335356: Shenandoah: Improve concurrent cleanup locking#20086

Closed
pengxiaolong wants to merge 19 commits into openjdk:master from pengxiaolong:JDK-8335356

Conversation


@pengxiaolong pengxiaolong commented Jul 8, 2024

Hi all,
This PR improves the usage of the heap lock in ShenandoahFreeSet::recycle_trash. The original observation mentioned in the bug was likely caused by an uncommitted/reverted change I added while Aleksey and I worked on JDK-8331411. Even though that change was never committed, the way ShenandoahFreeSet::recycle_trash uses the heap lock is still inefficient, and we think it should be improved.
With the logs added in commit 5688ee2, I collected some key metrics: the average time to acquire the heap lock is about 450-900 ns, and the average time to recycle one trash region is about 600 ns (if not batched, it is 1000+ ns, possibly related to branch prediction). The current implementation takes the heap lock once for every trash region; assuming there are 1000 regions to recycle, the time spent just acquiring the heap lock is more than 0.6 ms.

The PR splits the recycling process into two steps: 1. filter out all the trash regions; 2. recycle the trash regions in batches. This yields two performance benefits:
1. Less time spent acquiring the heap lock, and less contention with mutators/allocators.
2. Simpler loops in filtering and batch recycling, which presumably helps CPU branch prediction.
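The two steps can be sketched as follows. This is a minimal, self-contained Java illustration, not the actual change (which is C++ in ShenandoahFreeSet); Region, doRecycle and heapLock are hypothetical stand-ins:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the two-phase scheme: phase 1 scans for trash regions without the
// heap lock; phase 2 recycles them all under a single lock acquisition,
// instead of taking the lock once per region.
public class TwoPhaseRecycle {
    public static final ReentrantLock heapLock = new ReentrantLock();

    public record Region(int id, boolean trash) {}

    public static int recycleTrash(List<Region> regions) {
        // Phase 1: filter, lock-free. A simple predicate loop is also
        // friendlier to branch prediction than doing everything in one pass.
        List<Region> trash = new ArrayList<>();
        for (Region r : regions) {
            if (r.trash()) {
                trash.add(r);
            }
        }
        // Phase 2: recycle everything under one lock hold.
        heapLock.lock();
        try {
            for (Region r : trash) {
                doRecycle(r);
            }
        } finally {
            heapLock.unlock();
        }
        return trash.size();
    }

    static void doRecycle(Region r) {
        // Actual recycling work (zapping, making the region allocatable) elided.
    }

    public static void main(String[] args) {
        List<Region> regions = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            regions.add(new Region(i, i % 2 == 0));
        }
        System.out.println("recycled: " + recycleTrash(regions));
    }
}
```

Note that phase 2 as sketched still holds the lock for the whole batch; bounding how long the lock is held is the part the rest of this PR addresses.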

Here are some logs from test running h2 benchmark:

TIP with debug log, code link, Average time per region: 2312 ns

[6.013s][info][gc] GC(0) Recycled 0 regions in 58675ns, break down: acquiring lock -> 0, recycling -> 0.
[6.093s][info][gc] GC(0) Recycled 641 regions in 3025757ns, break down: acquiring lock -> 260016, recycling -> 548345.
[9.354s][info][gc] GC(1) Recycled 1 regions in 61793ns, break down: acquiring lock -> 481, recycling -> 1141.
[9.428s][info][gc] GC(1) Recycled 600 regions in 1083206ns, break down: acquiring lock -> 256578, recycling -> 511334.
[12.145s][info][gc] GC(2) Recycled 35 regions in 118390ns, break down: acquiring lock -> 13703, recycling -> 27438.
[12.202s][info][gc] GC(2) Recycled 553 regions in 911747ns, break down: acquiring lock -> 209511, recycling -> 426575.
[15.086s][info][gc] GC(3) Recycled 106 regions in 218396ns, break down: acquiring lock -> 39089, recycling -> 80520.
[15.164s][info][gc] GC(3) Recycled 454 regions in 762128ns, break down: acquiring lock -> 172583, recycling -> 351263.
[18.781s][info][gc] GC(4) Recycled 119 regions in 244275ns, break down: acquiring lock -> 45741, recycling -> 92841.
[18.866s][info][gc] GC(4) Recycled 437 regions in 721638ns, break down: acquiring lock -> 162149, recycling -> 329194.
[20.735s][info][gc] GC(5) Recycled 119 regions in 244292ns, break down: acquiring lock -> 45992, recycling -> 93320.
[20.788s][info][gc] GC(5) Recycled 193 regions in 364782ns, break down: acquiring lock -> 73267, recycling -> 149695.
[21.699s][info][gc] GC(6) Recycled 0 regions in 92333ns, break down: acquiring lock -> 0, recycling -> 0.
[21.856s][info][gc] GC(6) Recycled 552 regions in 1372852ns, break down: acquiring lock -> 302017, recycling -> 621198.
[22.196s][info][gc] GC(7) Recycled 0 regions in 80586ns, break down: acquiring lock -> 0, recycling -> 0.
[22.361s][info][gc] GC(7) Recycled 531 regions in 1433166ns, break down: acquiring lock -> 365550, recycling -> 632302.
[22.720s][info][gc] GC(8) Recycled 0 regions in 74306ns, break down: acquiring lock -> 0, recycling -> 0.
[22.898s][info][gc] GC(8) Recycled 530 regions in 1331485ns, break down: acquiring lock -> 299571, recycling -> 620722.
[23.147s][info][gc] GC(9) Recycled 0 regions in 77732ns, break down: acquiring lock -> 0, recycling -> 0.
[23.331s][info][gc] GC(9) Recycled 531 regions in 1361709ns, break down: acquiring lock -> 311774, recycling -> 653233.
[24.257s][info][gc] GC(10) Recycled 0 regions in 62440ns, break down: acquiring lock -> 0, recycling -> 0.
[24.460s][info][gc] GC(10) Recycled 1480 regions in 3611397ns, break down: acquiring lock -> 906760, recycling -> 1729345.
[25.407s][info][gc] GC(11) Recycled 1 regions in 61455ns, break down: acquiring lock -> 585, recycling -> 1362.
[25.597s][info][gc] GC(11) Recycled 1438 regions in 2313578ns, break down: acquiring lock -> 546099, recycling -> 1126582.

Optimized, but without the batching optimization: recycle all trash under one single lock acquisition, code link, Average time per region: 560 ns

[6.097s][info][gc] GC(0) Recycled 641 regions in 280280ns, break down: filtering -> 20216, taking heap lock -> 569, recycling -> 259495.
[9.568s][info][gc] GC(1) Recycled 1 regions in 10592ns, break down: filtering -> 8695, taking heap lock -> 568, recycling -> 1329.
[9.643s][info][gc] GC(1) Recycled 600 regions in 260624ns, break down: filtering -> 8023, taking heap lock -> 736, recycling -> 251865.
[12.648s][info][gc] GC(2) Recycled 34 regions in 24651ns, break down: filtering -> 9739, taking heap lock -> 524, recycling -> 14388.
[12.706s][info][gc] GC(2) Recycled 552 regions in 252231ns, break down: filtering -> 26613, taking heap lock -> 624, recycling -> 224994.
[15.579s][info][gc] GC(3) Recycled 102 regions in 50140ns, break down: filtering -> 7846, taking heap lock -> 550, recycling -> 41744.
[15.662s][info][gc] GC(3) Recycled 461 regions in 187735ns, break down: filtering -> 8851, taking heap lock -> 479, recycling -> 178405.
[19.269s][info][gc] GC(4) Recycled 117 regions in 55504ns, break down: filtering -> 8709, taking heap lock -> 548, recycling -> 46247.
[19.360s][info][gc] GC(4) Recycled 437 regions in 187981ns, break down: filtering -> 9582, taking heap lock -> 505, recycling -> 177894.
[21.269s][info][gc] GC(5) Recycled 124 regions in 57666ns, break down: filtering -> 8986, taking heap lock -> 537, recycling -> 48143.
[21.327s][info][gc] GC(5) Recycled 190 regions in 85890ns, break down: filtering -> 7768, taking heap lock -> 494, recycling -> 77628.
[22.367s][info][gc] GC(6) Recycled 547 regions in 378074ns, break down: filtering -> 11634, taking heap lock -> 714, recycling -> 365726.
[22.733s][info][gc] GC(7) Recycled 2 regions in 27277ns, break down: filtering -> 24172, taking heap lock -> 741, recycling -> 2364.
[22.895s][info][gc] GC(7) Recycled 533 regions in 339216ns, break down: filtering -> 12006, taking heap lock -> 778, recycling -> 326432.
[23.213s][info][gc] GC(8) Recycled 1 regions in 28104ns, break down: filtering -> 25781, taking heap lock -> 722, recycling -> 1601.
[23.393s][info][gc] GC(8) Recycled 529 regions in 341289ns, break down: filtering -> 10257, taking heap lock -> 682, recycling -> 330350.
[23.861s][info][gc] GC(9) Recycled 523 regions in 339914ns, break down: filtering -> 12715, taking heap lock -> 746, recycling -> 326453.
[25.120s][info][gc] GC(10) Recycled 1515 regions in 1148153ns, break down: filtering -> 13906, taking heap lock -> 739, recycling -> 1133508.
[25.873s][info][gc] GC(11) Recycled 1 regions in 13233ns, break down: filtering -> 11375, taking heap lock -> 568, recycling -> 1290.
[26.109s][info][gc] GC(11) Recycled 1237 regions in 493244ns, break down: filtering -> 12178, taking heap lock -> 557, recycling -> 480509.

With a batch size of 128, code link, Average time per region: 533 ns

[6.066s][info][gc] GC(0) Recycled 641 regions in 290048ns, break down: filtering -> 20937, recycling -> 257691, yields -> 6088.
[9.514s][info][gc] GC(1) Recycled 1 regions in 12863ns, break down: filtering -> 9186, recycling -> 1255, yields -> 1487.
[9.591s][info][gc] GC(1) Recycled 601 regions in 285321ns, break down: filtering -> 11941, recycling -> 265095, yields -> 5540.
[12.590s][info][gc] GC(2) Recycled 35 regions in 27005ns, break down: filtering -> 9873, recycling -> 14893, yields -> 1341.
[12.650s][info][gc] GC(2) Recycled 551 regions in 231127ns, break down: filtering -> 10833, recycling -> 212840, yields -> 5054.
[15.504s][info][gc] GC(3) Recycled 101 regions in 54762ns, break down: filtering -> 9759, recycling -> 42579, yields -> 1500.
[15.591s][info][gc] GC(3) Recycled 466 regions in 197675ns, break down: filtering -> 9672, recycling -> 181928, yields -> 4095.
[19.231s][info][gc] GC(4) Recycled 121 regions in 58985ns, break down: filtering -> 8601, recycling -> 47931, yields -> 1565.
[19.322s][info][gc] GC(4) Recycled 439 regions in 186173ns, break down: filtering -> 9754, recycling -> 170179, yields -> 4269.
[21.204s][info][gc] GC(5) Recycled 120 regions in 59352ns, break down: filtering -> 8120, recycling -> 48746, yields -> 1602.
[21.257s][info][gc] GC(5) Recycled 191 regions in 89995ns, break down: filtering -> 8695, recycling -> 77421, yields -> 2596.
[22.291s][info][gc] GC(6) Recycled 550 regions in 344021ns, break down: filtering -> 7845, recycling -> 325227, yields -> 7251.
[22.789s][info][gc] GC(7) Recycled 535 regions in 352193ns, break down: filtering -> 11420, recycling -> 328723, yields -> 8067.
[23.265s][info][gc] GC(8) Recycled 530 regions in 344795ns, break down: filtering -> 12356, recycling -> 321523, yields -> 7389.
[23.731s][info][gc] GC(9) Recycled 526 regions in 268314ns, break down: filtering -> 11727, recycling -> 248182, yields -> 5403.
[24.913s][info][gc] GC(10) Recycled 1520 regions in 1035594ns, break down: filtering -> 15586, recycling -> 995272, yields -> 16585.
[25.827s][info][gc] GC(11) Recycled 1 regions in 12540ns, break down: filtering -> 9191, recycling -> 1162, yields -> 1286.
[25.997s][info][gc] GC(11) Recycled 1406 regions in 593511ns, break down: filtering -> 11703, recycling -> 566761, yields -> 10335.

Batch with timed lock up to 30 µs, PR version, Average time per region: 1118 ns

[6.103s][info][gc] GC(0) Recycled 0 regions in 9421ns with 0 batches.
[6.189s][info][gc] GC(0) Recycled 641 regions in 570953ns with 18 batches.
[9.481s][info][gc] GC(1) Recycled 1 regions in 11745ns with 1 batches.
[9.552s][info][gc] GC(1) Recycled 597 regions in 495402ns with 16 batches.
[12.295s][info][gc] GC(2) Recycled 35 regions in 36924ns with 1 batches.
[12.353s][info][gc] GC(2) Recycled 546 regions in 443037ns with 14 batches.
[15.226s][info][gc] GC(3) Recycled 100 regions in 92537ns with 3 batches.
[15.310s][info][gc] GC(3) Recycled 463 regions in 423945ns with 14 batches.
[19.031s][info][gc] GC(4) Recycled 118 regions in 107655ns with 4 batches.
[19.121s][info][gc] GC(4) Recycled 440 regions in 359861ns with 12 batches.
[21.094s][info][gc] GC(5) Recycled 125 regions in 108325ns with 4 batches.
[21.155s][info][gc] GC(5) Recycled 191 regions in 192925ns with 6 batches.
[22.038s][info][gc] GC(6) Recycled 0 regions in 23493ns with 0 batches.
[22.213s][info][gc] GC(6) Recycled 574 regions in 748833ns with 23 batches.
[22.548s][info][gc] GC(7) Recycled 1 regions in 24498ns with 1 batches.
[22.716s][info][gc] GC(7) Recycled 532 regions in 684182ns with 21 batches.
[23.232s][info][gc] GC(8) Recycled 1 regions in 14411ns with 1 batches.
[23.455s][info][gc] GC(8) Recycled 528 regions in 715436ns with 22 batches.
[23.766s][info][gc] GC(9) Recycled 0 regions in 12247ns with 0 batches.
[23.982s][info][gc] GC(9) Recycled 703 regions in 685842ns with 22 batches.
[24.557s][info][gc] GC(10) Recycled 1 regions in 15890ns with 1 batches.
[24.760s][info][gc] GC(10) Recycled 1142 regions in 1506585ns with 47 batches.
[25.524s][info][gc] GC(11) Recycled 1 regions in 12424ns with 1 batches.
[25.731s][info][gc] GC(11) Recycled 1262 regions in 1695506ns with 52 batches.

Decided on batching with a timed lock for the following reasons:

  1. We can control exactly how long it holds the heap lock (set to at most 30 µs), and therefore bound exactly how much it can impact long-tail latencies in the worst case.
  2. Unlike a static batch size, the timed lock makes the algorithm adaptive to different hardware/runtimes: the batch size is adjusted automatically.
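The timed-lock batching above can be sketched as follows, assuming a 30 µs budget per lock hold. This is illustrative Java, not the actual HotSpot C++; heapLock and the Runnable-per-region model are hypothetical stand-ins:

```java
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of time-bounded batch recycling: hold the heap lock only until a
// deadline, then release and yield so waiting allocators can run. The batch
// size adapts to how fast this hardware recycles a region.
public class TimedBatchRecycle {
    static final long MAX_LOCK_HOLD_NS = 30_000; // 30 µs budget per lock hold
    public static final ReentrantLock heapLock = new ReentrantLock();

    public static int recycleWithTimedLock(List<Runnable> trash) {
        int batches = 0;
        int i = 0;
        while (i < trash.size()) {
            heapLock.lock();
            try {
                long deadline = System.nanoTime() + MAX_LOCK_HOLD_NS;
                // Recycle as many regions as fit in the time budget; at least
                // one per batch so the loop always makes progress.
                do {
                    trash.get(i++).run();
                } while (i < trash.size() && System.nanoTime() < deadline);
            } finally {
                heapLock.unlock();
            }
            batches++;
            Thread.yield(); // give waiting allocators a chance at the lock
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Runnable> trash =
            java.util.Collections.nCopies(1000, (Runnable) () -> {});
        System.out.println("batches: " + recycleWithTimedLock(trash));
    }
}
```

On a slow machine (or a fastdebug build that zaps freed memory), each region takes longer, so fewer regions fit in the 30 µs window and batches shrink automatically; a static batch size cannot do that.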

Additional test:

  • make clean test TEST=hotspot_gc_shenandoah
==============================
   TEST                                              TOTAL  PASS  FAIL ERROR   
   jtreg:test/hotspot/jtreg:hotspot_gc_shenandoah      261   261     0     0   
==============================
TEST SUCCESS

Progress

  • Change must be properly reviewed (1 review required, with at least 1 Reviewer)
  • Change must not contain extraneous whitespace
  • Commit message must refer to an issue

Issue

  • JDK-8335356: Shenandoah: Improve concurrent cleanup locking (Bug - P4)

Reviewers

Reviewing

Using git

Checkout this PR locally:
$ git fetch https://git.openjdk.org/jdk.git pull/20086/head:pull/20086
$ git checkout pull/20086

Update a local copy of the PR:
$ git checkout pull/20086
$ git pull https://git.openjdk.org/jdk.git pull/20086/head

Using Skara CLI tools

Checkout this PR locally:
$ git pr checkout 20086

View PR using the GUI difftool:
$ git pr show -t 20086

Using diff file

Download this PR as a diff file:
https://git.openjdk.org/jdk/pull/20086.diff

Webrev

Link to Webrev Comment


bridgekeeper bot commented Jul 8, 2024

👋 Welcome back xpeng! A progress list of the required criteria for merging this PR into master will be added to the body of your pull request. There are additional pull request commands available for use with this pull request.


openjdk bot commented Jul 8, 2024

@pengxiaolong This change now passes all automated pre-integration checks.

ℹ️ This project also has non-automated pre-integration requirements. Please see the file CONTRIBUTING.md for details.

After integration, the commit message for the final commit will be:

8335356: Shenandoah: Improve concurrent cleanup locking

Reviewed-by: ysr, shade

You can use pull request commands such as /summary, /contributor and /issue to adjust it as needed.

At the time when this comment was updated there had been 24 new commits pushed to the master branch:

  • 66db715: 8335637: Add explicit non-null return value expectations to Object.toString()
  • 7ab96c7: 8335409: Can't allocate and retain memory from resource area in frame::oops_interpreted_do oop closure after 8329665
  • fb66716: 8331725: ubsan: pc may not always be the entry point for a VtableStub
  • fb9a227: 8313909: [JVMCI] assert(cp->tag_at(index).is_unresolved_klass()) in lookupKlassInPool
  • e6c5aa7: 8336012: Fix usages of jtreg-reserved properties
  • e0fb949: 8335779: JFR: Hide sleep events
  • 537d20a: 8335766: Switch case with pattern matching and guard clause compiles inconsistently
  • a44b60c: 8335778: runtime/ClassInitErrors/TestStackOverflowDuringInit.java fails on ppc64 platforms after JDK-8334545
  • b5909ca: 8323242: Remove vestigial DONT_USE_REGISTER_DEFINES
  • dcf4e0d: 8335966: Remove incorrect problem listing of java/lang/instrument/NativeMethodPrefixAgent.java in ProblemList-Virtual.txt
  • ... and 14 more: https://git.openjdk.org/jdk/compare/a9b7f42f29120a3cca0d341350ff03cae485e68b...master

As there are no conflicts, your changes will automatically be rebased on top of these commits when integrating. If you prefer to avoid this automatic rebasing, please check the documentation for the /integrate command for further details.

As you do not have Committer status in this project an existing Committer must agree to sponsor your change. Possible candidates are the reviewers of this PR (@ysramakrishna, @shipilev) but any other Committer may sponsor as well.

➡️ To flag this PR as ready for integration with the above commit message, type /integrate in a new comment. (Afterwards, your sponsor types /sponsor in a new comment to perform the integration).


openjdk bot commented Jul 8, 2024

@pengxiaolong The following labels will be automatically applied to this pull request:

  • hotspot-gc
  • shenandoah

When this pull request is ready to be reviewed, an "RFR" email will be sent to the corresponding mailing lists. If you would like to change these labels, use the /label pull request command.

@openjdk openjdk bot added hotspot-gc hotspot-gc-dev@openjdk.org shenandoah shenandoah-dev@openjdk.org labels Jul 8, 2024
@pengxiaolong pengxiaolong marked this pull request as ready for review July 9, 2024 23:50
@openjdk openjdk bot added the rfr Pull request is ready for review label Jul 9, 2024

mlbridge bot commented Jul 9, 2024

Webrevs


@ysramakrishna ysramakrishna left a comment


Could you share any visible changes using the three different schemes (and the current baseline) with, say, SPECjbb or similar? Ideally, this affects some user-visible score or latency that we can use as a goodness metric. I am a bit leery of why exactly 30 µs, and not, say, 100 µs. Also, I am thinking that a straight count might perform as well, and the time-based solution almost seems overengineered to me; or at least I'd like to see evidence that the engineering effort is worth the resulting bang for a service-level metric such as latency or throughput.

@shipilev

Also, I am thinking that a straight count might perform as well and the time-based solution almost seems overengineered to me -- or at least I'd like to see evidence that that engineering effort is worth the resulting bang for a service level metric such as latency or throughput.

I suggested the time-based approach to Xiaolong to side-step the discussion about the "reasonable" batch size. A good batch size would fluctuate between machines, heap sizes, and region counts. Since the whole point of this dance is to avoid hoarding the lock for a long time, which would increase tail latencies for allocators waiting on the same lock, it is also more reasonable to just track the time directly here.

This is not to mention that fastdebug builds zap the unused heap, which makes cleanup orders of magnitude slower; large batch sizes would then hoard the lock far too long, deviating from the "normal" release behavior. The time-based approach accommodates this as well.


shipilev commented Jul 10, 2024

Found an easy workload to demonstrate the impact on max latencies on allocation path.

public class TimedAlloc {
    static volatile Object sink;

    public static void main(String... args) throws Throwable {
        for (int c = 0; c < 10; c++) {
            run();
        }
    }

    public static void run() {
        long cur = System.nanoTime();
        long end = cur + 3_000_000_000L;

        long sum = 0;
        long max = 0;
        long allocs = 0;

        while (cur < end) {
            long start = System.nanoTime();
            sink = new byte[40000];
            cur = System.nanoTime();
            long v = (cur - start);
            sum += v;
            max = Math.max(max, v);
            allocs++;
        }

        System.out.printf("Allocs: %15d; Avg: %8d, Max: %9d%n", allocs, (sum / allocs), max);
    }
}
$ java -Xms30g -Xmx30g -XX:+AlwaysPreTouch -XX:+UseShenandoahGC ../TimedAlloc.java

# Baseline
Allocs:         3294669; Avg:      868, Max:    445998
Allocs:         3372764; Avg:      852, Max:    528294
Allocs:         3342978; Avg:      861, Max:    478060
Allocs:         3341784; Avg:      861, Max:    468640
Allocs:         3341870; Avg:      861, Max:    494377
Allocs:         3342338; Avg:      861, Max:    469976
Allocs:         3340135; Avg:      862, Max:    377933
Allocs:         3341220; Avg:      862, Max:    511117
Allocs:         3341673; Avg:      861, Max:    494394
Allocs:         3341495; Avg:      861, Max:    506392

# Patched
Allocs:         3311013; Avg:      867, Max:     81908
Allocs:         3376562; Avg:      851, Max:     82006
Allocs:         3343815; Avg:      861, Max:     36880
Allocs:         3341872; Avg:      861, Max:     87010
Allocs:         3341824; Avg:      861, Max:     65734
Allocs:         3342571; Avg:      861, Max:    137444
Allocs:         3343636; Avg:      861, Max:     81407
Allocs:         3345381; Avg:      860, Max:     36505
Allocs:         3343388; Avg:      861, Max:     80849
Allocs:         3344334; Avg:      861, Max:     59562

@ysramakrishna

Impressive, and a nice demonstration of the improvements! Benchmarking with HyperAlloc may also be useful, and even plain SPECjbb may show some non-linear improvements, who knows? May be worth measuring. Running the count-based and time-based variants on a (slow, fast) x (arm, x86) matrix would be great, though that may be more effort than it is worth; just putting it out there. Good data of actual measured improvements always makes me happy, though! :-)

Thanks for the extra effort in collecting the data and sharing it.

Reviewed and approved, thank you!

@openjdk openjdk bot added the ready Pull request is ready to be integrated label Jul 10, 2024
@pengxiaolong

Thanks a lot @shipilev @ysramakrishna! I'll attach more benchmark results if I get some.

/integrate

@openjdk openjdk bot added the sponsor Pull request is ready to be sponsored label Jul 10, 2024

openjdk bot commented Jul 10, 2024

@pengxiaolong
Your change (at version dac1ae6) is now ready to be sponsored by a Committer.


pengxiaolong commented Jul 11, 2024

Based on Aleksey's benchmark, I wrote a very simple benchmark that generates an HdrHistogram; run commands like the below to collect the HdrHistogram metrics:

export JAVA_HOME=/home/xlpeng/repos/jdk-xlpeng/optimized-timed-lock
export JAVA_OPTS="-Xms30g -Xmx30g -XX:+AlwaysPreTouch -XX:+UseShenandoahGC"
./build/distributions/allocation-latency/bin/allocation-latency
export JAVA_HOME=/home/xlpeng/repos/jdk-xlpeng/baseline
./build/distributions/allocation-latency/bin/allocation-latency

(hardware: AWS EC2 r7g.4xlarge)
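The allocation-latency binary above is not included in this thread; a rough, dependency-free sketch of what such a recorder might look like (plain sorted-array percentiles instead of the HdrHistogram library; all names here are hypothetical):

```java
import java.util.Arrays;

// Dependency-free sketch of an allocation-latency recorder: time each
// allocation, then report nearest-rank percentiles from the sorted samples.
public class AllocLatency {
    static volatile Object sink;

    // p-th percentile of a sorted array, nearest-rank method.
    public static long percentile(long[] sorted, double p) {
        int idx = (int) Math.ceil(p / 100.0 * sorted.length) - 1;
        return sorted[Math.max(0, Math.min(sorted.length - 1, idx))];
    }

    public static void main(String[] args) {
        int n = 100_000;
        long[] lat = new long[n];
        for (int i = 0; i < n; i++) {
            long t0 = System.nanoTime();
            sink = new byte[40000]; // same allocation size as TimedAlloc above
            lat[i] = System.nanoTime() - t0;
        }
        Arrays.sort(lat);
        for (double p : new double[] {50, 90, 99, 99.9, 100}) {
            System.out.printf("p%-5s %8d ns%n", p, percentile(lat, p));
        }
    }
}
```

The real benchmark used HdrHistogram, which records into logarithmic buckets instead of storing every sample; the sketch above trades memory for simplicity.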

Here is the HdrHistogram:

Histogram

@shipilev

/sponsor


openjdk bot commented Jul 11, 2024

Going to push as commit b32e4a6.
Since your change was applied there have been 33 commits pushed to the master branch:

  • 62cbf70: 8336085: Fix simple -Wzero-as-null-pointer-constant warnings in CDS code
  • 2928753: 8324966: Allow selecting jtreg test case by ID from make
  • 1772a92: 8334457: Test javax/swing/JTabbedPane/bug4666224.java fail on macOS with because pressing the ‘C’ key does not switch the layout to WRAP_TAB_LAYOUT
  • b7d0eff: 8207908: JMXStatusTest.java fails assertion intermittently
  • cf940e1: 8335553: [Graal] Compiler thread calls into jdk.internal.vm.VMSupport.decodeAndThrowThrowable and crashes in OOM situation
  • b363de8: 8335946: DTrace code snippets should be generated when DTrace flags are enabled
  • d6c6847: 8335743: jhsdb jstack cannot print some information on the waiting thread
  • cad68e0: 8335935: Chained builders not sending transformed models to next transforms
  • 242f113: 8334481: [JVMCI] add LINK_TO_NATIVE to MethodHandleAccessProvider.IntrinsicMethod
  • 66db715: 8335637: Add explicit non-null return value expectations to Object.toString()
  • ... and 23 more: https://git.openjdk.org/jdk/compare/a9b7f42f29120a3cca0d341350ff03cae485e68b...master

Your commit was automatically rebased without conflicts.

@openjdk openjdk bot added the integrated Pull request has been integrated label Jul 11, 2024
@openjdk openjdk bot closed this Jul 11, 2024
@openjdk openjdk bot removed ready Pull request is ready to be integrated rfr Pull request is ready for review sponsor Pull request is ready to be sponsored labels Jul 11, 2024

openjdk bot commented Jul 11, 2024

@shipilev @pengxiaolong Pushed as commit b32e4a6.

💡 You may see a message that your pull request was closed with unmerged commits. This can be safely ignored.

@pengxiaolong pengxiaolong deleted the JDK-8335356 branch August 13, 2024 17:22

