8331194: NPE in ArrayCreationTree.java with -XX:-UseCompressedOops #20087
Conversation
👋 Welcome back cslucas! A progress list of the required criteria for merging this PR into
@JohnTortugo This change now passes all automated pre-integration checks. ℹ️ This project also has non-automated pre-integration requirements. Please see the file CONTRIBUTING.md for details. After integration, the commit message for the final commit will be:
You can use pull request commands such as /summary, /contributor and /issue to adjust it as needed. At the time when this comment was updated there had been 137 new commits pushed to the
As there are no conflicts, your changes will automatically be rebased on top of these commits when integrating. If you prefer to avoid this automatic rebasing, please check the documentation for the /integrate command for further details. As you do not have Committer status in this project an existing Committer must agree to sponsor your change. Possible candidates are the reviewers of this PR (@vnkozlov) but any other Committer may sponsor as well. ➡️ To flag this PR as ready for integration with the above commit message, type
@JohnTortugo The following label will be automatically applied to this pull request:
When this pull request is ready to be reviewed, an "RFR" email will be sent to the corresponding mailing list. If you would like to change these labels, use the /label pull request command.
Webrevs
test/hotspot/jtreg/compiler/c2/TestReduceAllocationAndNestedScalarized.java
Co-authored-by: Emanuel Peter <emanuel.peter@oracle.com>
About flags: I would just have a second run without any flags.
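Concretely, that suggestion would look roughly like the jtreg header below. This is only a sketch: the bug number and test name come from this PR, but the exact flag list and the test method name are assumptions, not the actual header of the test.

/*
 * @test
 * @bug 8331194
 * @summary Sketch of a two-run setup: one run with the flags that force the
 *          problematic IR shape, and a second run with no extra flags.
 *          (Illustrative only; the flags and the method name are assumed.)
 * @run main/othervm -XX:-UseCompressedOops
 *                   -XX:CompileCommand=compileonly,TestReduceAllocationAndNestedScalarized::test
 *                   TestReduceAllocationAndNestedScalarized
 * @run main TestReduceAllocationAndNestedScalarized
 */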
You can use … I also suggest putting the test into the corresponding directory.
The fix looks fine. I have a few comments.
@vnkozlov @eme64 - I moved all tests related to "RAM" to the suggested folder and modified the test added in this PR to remove the flags, as you suggested. Please let me ask more about configuring the test to run without flags. Without the flags, for instance the CompileCommands, the IR graph will very likely not be in the shape that triggers the problem. Even with the CompileCommands, if some other flags are included in the list of flags used to run the test, the IR graph may also not be in the shape that triggers the problem. Why run the test if it doesn't trigger the problem it was intended for? I'm probably missing something here.
Please revert the movement of the old tests. We need to backport this fix into JDK 23, and for that we need only the related test.
You can move them in a separate RFE in JDK 24 (current mainline).
We hope to catch other issues with each new test. We run with a variety of flag combinations set by the testing environment, and a test may fail for a different reason. It is all about code shape variations and flag combinations.
I reverted the move. I had forgotten about the backport.
Good.
test/hotspot/jtreg/compiler/escapeAnalysis/TestReduceAllocationAndNestedScalarized.java
I will run testing with the latest version. Tobias ran with version 0 and all passed. Since then there have been only cosmetic changes, but we still need to verify it.
Sounds good. Thank you!
My testing passed. @JohnTortugo, can you check why only 5 tests ran in GHA testing?
@vnkozlov - I'm looking into that and I'll update here.
@vnkozlov - I don't know why some of the checks didn't trigger automatically. It's still under investigation. I manually triggered them, and they all passed.
/integrate
@JohnTortugo |
@eme64 Are you fine with the current version?
test/hotspot/jtreg/compiler/escapeAnalysis/TestReduceAllocationAndNestedScalarized.java
continue;
}

ObjectValue* other = (ObjectValue*) sv_for_node_id(objs, n->_idx);
Is the cast here necessary? I see them generally in the file... but not sure why.
ObjectValue*
PhaseOutput::sv_for_node_id(GrowableArray<ScopeValue*> *objs, int id) {
Because the objs array may contain 3 types of objects: ScopeValue, ObjectValue and ObjectMergeValue.
I would leave this code as it is for this PR but suggest filing a follow-up RFE to clean this up. Instead of such casts we should use as_ObjectValue() and as_ObjectMergeValue(), which have asserts to check the type.
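As an illustration of that accessor-with-assert idea, here is a hypothetical Java sketch. The class names are invented and only loosely mirror the ScopeValue / ObjectValue / ObjectMergeValue relationship; the real code is HotSpot C++, so this shows only the shape of the pattern, not an implementation.

// Hypothetical sketch of the "as_Xxx() with a type check" pattern; these
// classes are invented and are not the real HotSpot ScopeValue hierarchy.
abstract class ScopeValueSketch {
    boolean isObject()      { return false; }
    boolean isObjectMerge() { return false; }

    ObjectValueSketch asObject() {
        throw new AssertionError("not an ObjectValue");
    }
    ObjectMergeValueSketch asObjectMerge() {
        throw new AssertionError("not an ObjectMergeValue");
    }
}

class ObjectValueSketch extends ScopeValueSketch {
    @Override boolean isObject() { return true; }
    @Override ObjectValueSketch asObject() { return this; }
}

class ObjectMergeValueSketch extends ObjectValueSketch {
    @Override boolean isObjectMerge() { return true; }
    @Override ObjectMergeValueSketch asObjectMerge() { return this; }
}

A call site would then use asObject() instead of a raw cast, so a wrong type fails loudly at the accessor rather than propagating a mistyped value.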
Thank you @eme64, @vnkozlov for reviewing. I created this RFE to remove the unnecessary casts: https://bugs.openjdk.org/browse/JDK-8336495
@vnkozlov @JohnTortugo I'm not familiar enough with this part of the code; I just gave some code style and testing suggestions. I was given something else that has higher priority, so maybe someone else can give the VM changes a review?
…nAndNestedScalarized.java Co-authored-by: Emanuel Peter <emanuel.peter@oracle.com>
Update is good.
/integrate
@JohnTortugo
/sponsor
Going to push as commit 005fb67.
Your commit was automatically rebased without conflicts.
@vnkozlov @JohnTortugo Pushed as commit 005fb67. 💡 You may see a message that your pull request was closed with unmerged commits. This can be safely ignored.
/backport :jdk23
@TobiHartmann the backport was successfully created on the branch backport-TobiHartmann-005fb67e-jdk23 in my personal fork of openjdk/jdk. To create a pull request with this backport targeting openjdk/jdk:jdk23, just click the following link: The title of the pull request is automatically filled in correctly and below you find a suggestion for the pull request body:
If you need to update the source branch of the pull then run the following commands in a local clone of your personal fork of openjdk/jdk:
Please, review this PR to fix an issue in debug information serialization encountered when RAM reduces a Phi for which one of the inputs is an object already scalar replaced.

Details:

Consider a class Picture that has two reference fields, first and second, of type Point. In a random method in the application an object obj of this class is created, and the fields of this object are initialized such that first is assigned a new object whereas second receives the output of a Phi node merging the object assigned to first and some other allocation. Also, assume obj is used as debug information in an uncommon_trap and none of these objects escapes. I.e., we have a scenario like this (see the sketch after this description):

After one iteration of EA+SR, Allocation A will be scalar replaced and the debug information in <Trap> adjusted accordingly. The description of field second in the debug information on <Trap> will, however, still involve a Phi node between allocations B and C. In the next iteration of EA+SR the Phi node for field second will be reduced by RAM and the debug information will be adjusted accordingly. So far nothing is wrong.

The issue happens because the existing code in Process_OopMap_Node to serialize debug information was missing a check. Simply put, the check for setting is_root of an ObjectValue wasn't considering that the ObjectValue might be a description of a field of a scalar replaced object; it was only checking whether the ObjectValue was a local, stack or monitor. Consequently, the allocation assigned to obj.first (yes, first) was not being marked as root.

But the issue only manifested if the <trap> was exercised AND the result of <cond> was true. If the result of <cond> was false when the trap was exercised, then no problem would happen. The reason is that when <cond> is true the select method in ObjectMergeValue would flag, correctly, that the allocation inside the if needs to be rematerialized, and the other input of the ObjectMergeValue shouldn't be rematerialized because _is_root == false, meaning it's just a candidate for rematerialization.

Fixing the check for setting _is_root solved the problem. The -XX:-UseCompressedOops flag wasn't directly related to the problem; it just caused DecodeN and EncodeP nodes to not show up in the graph, which let RAM consider the Phi for reduction.
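To make the scenario above easier to follow, here is a minimal Java sketch of the shape being described. It uses hypothetical class and method names and a hypothetical rarely taken branch in place of the uncommon trap; the A/B/C comments follow one reading of the description, and this is not the actual test added by this PR.

// Hypothetical sketch only: names, the loop, and the branch probability are
// illustrative; the real reproducer is TestReduceAllocationAndNestedScalarized.java.
class Point {
    int x;
}

class Picture {
    Point first;
    Point second;
}

public class Scenario {
    static int test(boolean cond) {
        Picture obj = new Picture();   // allocation A (per the description)
        Point p1 = new Point();        // allocation B, assigned to obj.first
        Point p2 = new Point();        // allocation C, the other Phi input
        obj.first = p1;
        obj.second = cond ? p1 : p2;   // obj.second is a Phi merging B and C
        if (cond) {
            // Rarely taken path: C2 can turn this into an uncommon trap whose
            // debug information describes obj and both of its fields.
            return obj.second.x + obj.first.x;
        }
        return obj.first.x;
    }

    public static void main(String[] args) {
        int sum = 0;
        for (int i = 0; i < 100_000; i++) {
            sum += test(i % 1_000 == 0);   // keep the cond == true path rare
        }
        System.out.println(sum);
    }
}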
Reviewing
Using git
Checkout this PR locally:
$ git fetch https://git.openjdk.org/jdk.git pull/20087/head:pull/20087
$ git checkout pull/20087
Update a local copy of the PR:
$ git checkout pull/20087
$ git pull https://git.openjdk.org/jdk.git pull/20087/head
Using Skara CLI tools
Checkout this PR locally:
$ git pr checkout 20087
View PR using the GUI difftool:
$ git pr show -t 20087
Using diff file
Download this PR as a diff file:
https://git.openjdk.org/jdk/pull/20087.diff
Webrev
Link to Webrev Comment