
8266963: Remove safepoint poll introduced in 8262443 due to reentrance issue #4028

Closed
wants to merge 3 commits

Conversation

@linade (Contributor) commented May 14, 2021

Shenandoah hangs when running SPECjvm2008 derby. The reason is that a Java thread reenters safepoint/handshake processing and blocks on itself. Please check out the bug id for more details. After discussing with @zhengyu123, we think this might not be Shenandoah-specific. I propose adding a check before processing the safepoint/handshake.

An alternative approach (also an insight from @zhengyu123) is to move the check a little earlier, to the specific place where the Java thread does a ThreadBlockInVM. To be reassured that no other reentrant paths exist, I still leave the check in the safepoint/handshake code as debug code. See master...linade:reentrancecond

I'd appreciate more of your thoughts on these as I understand it could be a rather critical part of the code.
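The reentrance the description refers to can be sketched in a few lines. This is a hypothetical model, not HotSpot's actual code: `FakeThread`, `process_if_requested_slow`, and the flag name are illustrative. The idea of the proposed check is that a nested poll from inside handshake processing bails out instead of re-entering (and self-deadlocking on the non-recursive handshake mutex):

```cpp
#include <cassert>

// Toy stand-in for a JavaThread; names are illustrative only.
struct FakeThread {
  bool processing_handshake = false;  // the proposed reentrance guard
  int processed = 0;                  // counts handshake closures run
};

// Sketch of the proposed check: a reentrant call returns immediately,
// leaving the outer invocation to finish the job.
void process_if_requested_slow(FakeThread& t) {
  if (t.processing_handshake) {
    return;  // nested poll while already processing: bail out
  }
  t.processing_handshake = true;
  t.processed++;                 // stand-in for running the closure
  process_if_requested_slow(t);  // simulated nested poll from the closure
  t.processing_handshake = false;
}
```

Without the guard, the simulated nested poll would recurse (in the real VM: block on a mutex the thread already holds); with it, the closure runs exactly once.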


Progress

  • Change must not contain extraneous whitespace
  • Commit message must refer to an issue
  • Change must be properly reviewed

Issue

  • JDK-8266963: Remove safepoint poll introduced in 8262443 due to reentrance issue

Reviewers

Reviewing

Using git

Checkout this PR locally:
$ git fetch https://git.openjdk.java.net/jdk pull/4028/head:pull/4028
$ git checkout pull/4028

Update a local copy of the PR:
$ git checkout pull/4028
$ git pull https://git.openjdk.java.net/jdk pull/4028/head

Using Skara CLI tools

Checkout this PR locally:
$ git pr checkout 4028

View PR using the GUI difftool:
$ git pr show -t 4028

Using diff file

Download this PR as a diff file:
https://git.openjdk.java.net/jdk/pull/4028.diff

@bridgekeeper bot commented May 14, 2021

👋 Welcome back linade! A progress list of the required criteria for merging this PR into master will be added to the body of your pull request. There are additional pull request commands available for use with this pull request.


@openjdk bot added the rfr label May 14, 2021
@openjdk bot commented May 14, 2021

@linade The following label will be automatically applied to this pull request:

  • hotspot-runtime

When this pull request is ready to be reviewed, an "RFR" email will be sent to the corresponding mailing list. If you would like to change these labels, use the /label pull request command.


@mlbridge bot commented May 14, 2021

Webrevs


@pchilano (Contributor) left a comment

Hi Yude,

Comments about the issue below.

Thanks,
Patricio


// should just return because otherwise the thread will probably block on the
// reentrance of the handshake mutex. We also don't need to do anything
// because the process() routine will be retried after the handshake returns.
return;
@pchilano (Contributor) commented May 14, 2021

We cannot do a return here because a safepoint could be already in progress after transitioning out of the blocked state. The handshake would then execute concurrently with the safepoint operation which is not allowed.
We used to have a flag in HandshakeState to avoid these reentrant cases [1], but we removed it after we added the NoSafepointVerifier checks in handshake.cpp. I'm guessing this failed with release bits, otherwise you should have hit the assert in check_possible_safepoint() in ThreadBlockInVM. So unless we also remove the NoSafepointVerifier checks in handshake.cpp bringing that flag back would just solve this issue for release builds. I think the question is then whether it is safe to poll for safepoints inside a handshake closure. Before stackwatermarks maybe there were no issues, but now I don't think so. If ThreadA is executing a handshake on behalf of ThreadB and blocks in ThreadBlockInVM, then a safepoint could happen. After resuming I don't think it is safe for ThreadA to keep poking into ThreadB stack before doing StackWatermarkSet::start_processing() on ThreadB. Maybe @fisk could confirm?
Note that the NoSafepointVerifier checks are also there to prevent requesting a VM operation inside the handshake since that can deadlock too. So even if polling would be fine we would need to keep checking that (not necessarily with NoSafepointVerifier though).
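The NoSafepointVerifier checks mentioned above are scoped debug-build assertions; a minimal model of the pattern, with illustrative names (the real verifier lives in HotSpot and works per-thread), is just an RAII counter that a poll site consults:

```cpp
#include <cassert>

// Depth of nested "no safepoint allowed here" scopes (toy model:
// a single global; the real HotSpot verifier is per-thread).
static int g_no_safepoint_depth = 0;

// RAII scope marking a region where polling must not happen,
// analogous in spirit to NoSafepointVerifier.
struct NoSafepointScope {
  NoSafepointScope()  { ++g_no_safepoint_depth; }
  ~NoSafepointScope() { --g_no_safepoint_depth; }
};

// A poll site checks the scope depth; returns false (in the real VM:
// asserts) if called inside a no-safepoint region.
bool poll_allowed() {
  return g_no_safepoint_depth == 0;
}
```

This also illustrates why bringing back the old HandshakeState flag alone would only fix release builds: debug builds would still trip the scoped check first.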

[1]

void process_by_self() {


@linade (Contributor, Author) commented May 15, 2021

We cannot do a return here because a safepoint could be already in progress after transitioning out of the blocked state. The handshake would then execute concurrently with the safepoint operation which is not allowed.

I didn't get this part. Being able to return means that we are already in another enclosing SafepointMechanism::process_if_requested_slow(). This enclosing SafepointMechanism::process_if_requested_slow() should make sure we are processing the handshake safely, right?

We used to have a flag in HandshakeState to avoid these reentrant cases

I think this flag should also prevent reentrant handshake.

I'm guessing this failed with release bits, otherwise you should have hit the assert in check_possible_safepoint() in ThreadBlockInVM.

It's indeed a release build. But the fastdebug build miraculously runs without any hang or crash. Maybe it took a different path.

While I was figuring out why the debug build wouldn't crash, I found that the condition i != 0 in do_interpretation (if I understand correctly, it's just a spin count) could be hiding the reentrance problem. If I make this change:

diff --git a/src/hotspot/share/oops/generateOopMap.cpp b/src/hotspot/share/oops/generateOopMap.cpp
index 06ae6b0dbaf..8048aa92fc6 100644
--- a/src/hotspot/share/oops/generateOopMap.cpp
+++ b/src/hotspot/share/oops/generateOopMap.cpp
@@ -911,7 +911,7 @@ void GenerateOopMap::do_interpretation(Thread* thread)
 {
   int i = 0;
   do {
-    if (i != 0 && thread->is_Java_thread()) {
+    if (thread->is_Java_thread()) {
       JavaThread* jt = thread->as_Java_thread();
       if (jt->thread_state() == _thread_in_vm) {
         // Since this JavaThread has looped at least once and is _thread_in_vm,

I get

# A fatal error has been detected by the Java Runtime Environment:
#
#  Internal Error (/home/yude.lyd/jdk-master/src/hotspot/share/runtime/mutex.cpp:407), pid=122250, tid=123348
#  assert(false) failed: Attempting to acquire lock tty_lock/3 out of order with lock stack_watermark_lock/2 -- possible deadlock
#
# JRE version: OpenJDK Runtime Environment (17.0) (fastdebug build 17-internal+0-adhoc.yudelyd.jdk-master)
# Java VM: OpenJDK 64-Bit Server VM (fastdebug 17-internal+0-adhoc.yudelyd.jdk-master, mixed mode, sharing, compressed oops, compressed class ptrs, shenandoah gc, linux-amd64)
# Problematic frame:
# V  [libjvm.so+0x1485e50]  Mutex::check_rank(Thread*)+0x120
#
# No core dump will be written. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# If you would like to submit a bug report, please visit:
#   https://bugreport.java.com/bugreport/crash.jsp
#

---------------  S U M M A R Y ------------

Command Line: -Xmx24g -Xms24g -XX:ParallelGCThreads=16 -XX:+UseShenandoahGC -XX:-TieredCompilation -Xlog:gc*=debug,handshake=trace:file=510s.log:tid:filesize=200m SPECjvm2008.jar -ict -coe -i 5 derby

XXXX, Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz, 96 cores, 503G, XXXX
Time: Sat May 15 15:32:28 2021 CST elapsed time: 101.087584 seconds (0d 0h 1m 41s)

---------------  T H R E A D  ---------------

Current thread (0x00007f5ca802ecc0):  JavaThread "BenchmarkThread derby 51" [_thread_in_vm, id=123348, stack(0x00007f5b8b4f5000,0x00007f5b8b5f6000)]

Stack: [0x00007f5b8b4f5000,0x00007f5b8b5f6000],  sp=0x00007f5b8b5f1f20,  free space=1011k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0x1485e50]  Mutex::check_rank(Thread*)+0x120
V  [libjvm.so+0x14867f1]  Mutex::lock_without_safepoint_check(Thread*)+0x51
V  [libjvm.so+0x1541090]  defaultStream::hold(long)+0xa0
V  [libjvm.so+0x154124a]  defaultStream::write(char const*, unsigned long)+0x2a
V  [libjvm.so+0x153dc30]  outputStream::do_vsnprintf_and_write_with_automatic_buffer(char const*, __va_list_tag*, bool)+0xf0
V  [libjvm.so+0x153e97f]  outputStream::print_cr(char const*, ...)+0x1bf
V  [libjvm.so+0x1981063]  JavaThread::check_possible_safepoint()+0x63
V  [libjvm.so+0xd2f838]  GenerateOopMap::do_interpretation(Thread*)+0x238
V  [libjvm.so+0xd2fe0e]  GenerateOopMap::compute_map(Thread*)+0x3ae
V  [libjvm.so+0x150d243]  OopMapForCacheEntry::compute_map(Thread*)+0x163
V  [libjvm.so+0x150eef5]  OopMapCacheEntry::fill(methodHandle const&, int)+0xf5
V  [libjvm.so+0x150fa40]  OopMapCache::compute_one_oop_map(methodHandle const&, int, InterpreterOopMap*)+0x60
V  [libjvm.so+0x141d876]  Method::mask_for(int, InterpreterOopMap*)+0x96
V  [libjvm.so+0xbd6381]  frame::oops_interpreted_do(OopClosure*, RegisterMap const*, bool) const+0x4c1
V  [libjvm.so+0x185073e]  StackWatermarkFramesIterator::process_one(void*)+0x20e
V  [libjvm.so+0x18515f8]  StackWatermark::process_one()+0x88
V  [libjvm.so+0x18526e9]  StackWatermarkSet::on_iteration(JavaThread*, frame const&)+0x89
V  [libjvm.so+0xbd98ca]  frame::sender(RegisterMap*) const+0x7a
V  [libjvm.so+0x167f988]  check_compiled_frame(JavaThread*)+0x88
V  [libjvm.so+0x168397a]  OptoRuntime::new_instance_C(Klass*, JavaThread*)+0xfa


@robehn (Contributor) commented May 17, 2021

We can just remove this:

    if (i != 0 && thread->is_Java_thread()) {
      JavaThread* jt = thread->as_Java_thread();
      if (jt->thread_state() == _thread_in_vm) {
        // Since this JavaThread has looped at least once and is _thread_in_vm,
        // we honor any pending blocking request.
        ThreadBlockInVM tbivm(jt);
      }
    }

Under some weird circumstances it can prolong time-to-safepoint (TTS).
But it was me who found it; reverting it wouldn't draw any complaints from anyone else.

(We have seen this in experimental code internally, and just removing the code that goes to blocked was suggested then as well.)


@mlbridge bot commented May 17, 2021

Mailing list message from patricio.chilano.mateo at oracle.com on hotspot-runtime-dev:

On 5/15/21 5:04 AM, Yude Lin wrote:

On Fri, 14 May 2021 19:49:10 GMT, Patricio Chilano Mateo <pchilanomate at openjdk.org> wrote:

We cannot do a return here because a safepoint could be already in progress after transitioning out of the blocked state. The handshake would then execute concurrently with the safepoint operation which is not allowed.
I didn't get this part. Being able to return means that we are already in another enclosing SafepointMechanism::process_if_requested_slow(). This enclosing SafepointMechanism::process_if_requested_slow() should make sure we are processing the handshake safely, right?

The issue is that inside the handshake closure you would transition to
the blocked state in ThreadBlockInVM(), which allows a safepoint to
proceed. If in ~ThreadBlockInVM() we don't stop for the safepoint and
just return in SafepointMechanism::process_if_requested_slow() then now
you would have a safepoint and handshake executing at the same time.
If we want to keep the ThreadBlockInVM in
GenerateOopMap::do_interpretation() we need to either avoid calling it
while inside a handshake closure (by moving the check further up as you
try to do in your other version), or we move the check further down to
after honoring the safepoint in SafepointSynchronize::block() (as with
the old flag we used to have). The latter implies answering the question
of whether it is even safe to allow safepoints in the first place and
then resume the handshake, which again I don't think it is. Avoiding
polling is a straightforward solution for this issue and goes in line
with the NoSafepointVerifier checks that we use in handshake.cpp.
Or, as Robbin pointed out, we could back out 8262443 altogether? :)

It's indeed release build. But fastdebug build miraculously runs without any hang or crash. Maybe it took a different path.

While I was figuring out why the debug build won't crash. I found that the condition `i != 0` in do_interpretation (if I understand correctly, it's just a spin count) could be hiding the reentrance problem. If I make the change

diff --git a/src/hotspot/share/oops/generateOopMap.cpp b/src/hotspot/share/oops/generateOopMap.cpp
index 06ae6b0dbaf..8048aa92fc6 100644
--- a/src/hotspot/share/oops/generateOopMap.cpp
+++ b/src/hotspot/share/oops/generateOopMap.cpp
@@ -911,7 +911,7 @@ void GenerateOopMap::do_interpretation(Thread* thread)
{
int i = 0;
do {
- if (i != 0 && thread->is_Java_thread()) {
+ if (thread->is_Java_thread()) {
JavaThread* jt = thread->as_Java_thread();
if (jt->thread_state() == _thread_in_vm) {
// Since this JavaThread has looped at least once and is _thread_in_vm,

I get

# A fatal error has been detected by the Java Runtime Environment:
#
# Internal Error (/home/yude.lyd/jdk-master/src/hotspot/share/runtime/mutex.cpp:407), pid=122250, tid=123348
# assert(false) failed: Attempting to acquire lock tty_lock/3 out of order with lock stack_watermark_lock/2 -- possible deadlock
#
# JRE version: OpenJDK Runtime Environment (17.0) (fastdebug build 17-internal+0-adhoc.yudelyd.jdk-master)
# Java VM: OpenJDK 64-Bit Server VM (fastdebug 17-internal+0-adhoc.yudelyd.jdk-master, mixed mode, sharing, compressed oops, compressed class ptrs, shenandoah gc, linux-amd64)
# Problematic frame:
# V [libjvm.so+0x1485e50] Mutex::check_rank(Thread*)+0x120
#
# No core dump will be written. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# If you would like to submit a bug report, please visit:
# https://bugreport.java.com/bugreport/crash.jsp
#

Yes, this is a similar issue. We cannot poll for safepoints while
processing a watermark either. The rank check is just a secondary issue
from trying to grab tty_lock while crashing. So backing out 8262443 would
solve that too.

Thanks,
Patricio
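The state-machine reasoning in this reply can be modeled compactly. This is an illustrative sketch, not HotSpot's implementation: the enum values echo HotSpot's names, but `ToyThread`, `BlockInVM`, and `safepoint_can_proceed` are made up. The point is that a thread sitting inside a ThreadBlockInVM scope is already safepoint-safe, so a safepoint can complete around it:

```cpp
#include <cassert>

// Thread states (names mirror HotSpot's, the rest is a toy model).
enum ThreadState { _thread_in_vm, _thread_blocked };

struct ToyThread {
  ThreadState state = _thread_in_vm;
};

// The VM thread treats a blocked thread as safepoint-safe: it can
// proceed with the safepoint without waiting for that thread to poll.
bool safepoint_can_proceed(const ToyThread& t) {
  return t.state == _thread_blocked;
}

// ThreadBlockInVM-style RAII scope: the thread is blocked for its
// duration, hence safepoint-safe the whole time.
struct BlockInVM {
  ToyThread& t;
  explicit BlockInVM(ToyThread& th) : t(th) { t.state = _thread_blocked; }
  ~BlockInVM() { t.state = _thread_in_vm; }
};
```

This is why returning early from the poll in ~ThreadBlockInVM is unsafe: the safepoint may already be in progress the moment the blocked state was entered, so skipping the block on exit lets the handshake closure continue concurrently with the safepoint operation.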



@linade (Contributor, Author) commented May 18, 2021

We can just remove this:

if (i != 0 && thread->is_Java_thread()) {
  JavaThread* jt = thread->as_Java_thread();
  if (jt->thread_state() == _thread_in_vm) {
    // Since this JavaThread has looped at least once and is _thread_in_vm,
    // we honor any pending blocking request.
    ThreadBlockInVM tbivm(jt);
  }
}

Under some weird circumstances it can prolong time-to-safepoint (TTS).
But it was me who found it; reverting it wouldn't draw any complaints from anyone else.

(We have seen this in experimental code internally, and just removing the code that goes to blocked was suggested then as well.)

In that case this is a solution I can get behind.

The issue is that inside the handshake closure you would transition to
the blocked state in ThreadBlockInVM(), which allows a safepoint to
proceed. If in ~ThreadBlockInVM() we don't stop for the safepoint and
just return in SafepointMechanism::process_if_requested_slow() then now
you would have a safepoint and handshake executing at the same time.

Ah, I see. There I was thinking a thread is only considered to be at a safepoint by the VM thread when it's blocked in SafepointMechanism::process, but it actually happens as soon as it transitions to the _thread_blocked state. Thanks for pointing it out!


@robehn (Contributor) commented May 21, 2021

@linade will you re-do the PR with that change instead?


@linade changed the title from "8266963: Reentrance condition for safepoint/handshake" to "8266963: Remove safepoint poll introduced in 8262443 due to reentrance issue" on May 24, 2021
@linade (Contributor, Author) commented May 24, 2021

@linade will you re-do the PR with that change instead?

Done. Sorry for the delay. Would you take a look?


@robehn (Contributor) approved these changes on May 24, 2021 and left a comment

Thank you!


@openjdk bot commented May 24, 2021

@linade This change now passes all automated pre-integration checks.

ℹ️ This project also has non-automated pre-integration requirements. Please see the file CONTRIBUTING.md for details.

After integration, the commit message for the final commit will be:

8266963: Remove safepoint poll introduced in 8262443 due to reentrance issue

Reviewed-by: rehn, zgu, dholmes

You can use pull request commands such as /summary, /contributor and /issue to adjust it as needed.

At the time when this comment was updated there had been 186 new commits pushed to the master branch:

  • a98e476: 8267311: vmTestbase/gc/gctests/StringInternGC/StringInternGC.java eventually OOMEs
  • 5aa45f2: 8267403: tools/jpackage/share/FileAssociationsTest.java#id0 failed with "Error: Bundler "Mac PKG Package" (pkg) failed to produce a package"
  • c20ca42: 8267691: Change table to obsolete CriticalJNINatives in JDK 18, not 17
  • e751b7b: 8267683: rfc7301Grease8F value not displayed correctly in SSLParameters javadoc
  • 0b77359: 8224243: Add implSpec's to AccessibleObject and seal Executable
  • 594d454: 8267574: Dead code in HtmlStyle/HtmlDocletWriter
  • 2ef2450: 8263445: Duplicate key compiler.err.expected.module in compiler.properties
  • cc687fd: 8267575: Add new documentation group in HtmlStyle
  • 5a5b807: 8267633: Clarify documentation of (Doc)TreeScanner
  • 86a8f44: 8267317: Remove DeferredTypeCompleter
  • ... and 176 more: https://git.openjdk.java.net/jdk/compare/06d760283344a1d0fd510aed306e0efb76b51617...master

As there are no conflicts, your changes will automatically be rebased on top of these commits when integrating. If you prefer to avoid this automatic rebasing, please check the documentation for the /integrate command for further details.

As you do not have Committer status in this project an existing Committer must agree to sponsor your change. Possible candidates are the reviewers of this PR (@pchilano, @robehn, @zhengyu123, @dholmes-ora) but any other Committer may sponsor as well.

➡️ To flag this PR as ready for integration with the above commit message, type /integrate in a new comment. (Afterwards, your sponsor types /sponsor in a new comment to perform the integration).


@openjdk bot added the ready label May 24, 2021
@linade (Contributor, Author) commented May 24, 2021

/integrate


@openjdk bot added the sponsor label May 24, 2021
@openjdk bot commented May 24, 2021

@linade
Your change (at version 9677d41) is now ready to be sponsored by a Committer.


@dholmes-ora (Member) commented May 24, 2021

Hold on a moment! This undoes 8262443, so what happens with the issue that bug was fixing?

David


@robehn (Contributor) commented May 24, 2021

Hold on a moment! This undoes 8262443, so what happens with the issue that bug was fixing?

As I said above, it seems like I'm the only one who has seen an issue with this, so I'm the finder/reporter and fixer of the original bug.
In a ZGC branch they run this code during stack watermark processing, which then also hits this problem.
Since this is just a performance enhancement, it was suggested to just remove it.

I don't have time to figure out exactly what happens when we start looping here, so I'm fine with removing this for now.

Thanks, Robbin



@zhengyu123 (Contributor) left a comment

Looks good, thanks.


@mlbridge bot commented May 24, 2021

Mailing list message from David Holmes on hotspot-runtime-dev:

On 24/05/2021 7:54 pm, Robbin Ehn wrote:

On Mon, 24 May 2021 09:27:15 GMT, David Holmes <dholmes at openjdk.org> wrote:

Hold on a moment! This undoes 8262443, so what happens with the issue that bug was fixing?

As I said above, it seems like I'm the only one who has seen an issue with this, so I'm the finder/reporter and fixer of the original bug.
In a ZGC branch they run this code during stack watermark processing, which then also hits this problem.
Since this is just a performance enhancement, it was suggested to just remove it.

I don't have time to figure out exactly what happens when we start looping here, so I'm fine with removing this for now.

In that case 8262443 needs to be updated to explain this.

Thanks,
David


@linade (Contributor, Author) commented May 25, 2021

I see that Robbin has updated 8262443 (thanks, Robbin). Are we clear to proceed? :)


@dholmes-ora (Member) left a comment

Okay by me.

Thanks,
David


@linade (Contributor, Author) commented May 26, 2021

May I ask your help to sponsor?


@robehn (Contributor) commented May 26, 2021

Sorry, I don't know how; I think @dholmes-ora and @zhengyu123 have much more experience with that.
Can either of you help out?


@dholmes-ora (Member) commented May 26, 2021

/sponsor


@dholmes-ora (Member) commented May 26, 2021

@robehn You just enter /sponsor as a comment. I've done it now.


@openjdk bot commented May 26, 2021

@dholmes-ora @linade Since your change was applied there have been 196 commits pushed to the master branch:

  • 45e0597: 8264302: Create implementation for Accessibility native peer for Splitpane java role
  • 4343997: 8267708: Remove references to com.sun.tools.javadoc.**
  • f632254: 8267221: jshell feedback is incorrect when creating method with array varargs parameter
  • bf8d4a8: 8267583: jmod fails on symlink to class file
  • 083416d: 8267130: Memory Overflow in Disassembler::load_library
  • 9d305b9: 8252372: Check if cloning is required to move loads out of loops in PhaseIdealLoop::split_if_with_blocks_post()
  • 0394416: 8267468: Rename refill waster counters in ThreadLocalAllocBuffer
  • b33b8bc: 8267750: Incomplete fix for JDK-8267683
  • ac36b7d: 8267452: Delegate forEachRemaining in Spliterators.iterator()
  • d0d2ddc: 8267651: runtime/handshake/HandshakeTimeoutTest.java times out when dumping core
  • ... and 186 more: https://git.openjdk.java.net/jdk/compare/06d760283344a1d0fd510aed306e0efb76b51617...master

Your commit was automatically rebased without conflicts.

Pushed as commit 9c346a1.

💡 You may see a message that your pull request was closed with unmerged commits. This can be safely ignored.


5 participants