8280761: UseCompressedOops should be set after limit_heap_by_allocatable_memory #7938
Conversation
👋 Welcome back tkiriyama! A progress list of the required criteria for merging this PR into the target branch will be added to the body of your pull request.
@tkiriyama The following label will be automatically applied to this pull request:
When this pull request is ready to be reviewed, an "RFR" email will be sent to the corresponding mailing list. If you would like to change these labels, use the /label pull request command.
Webrevs
Could anyone review this pull request, please?
/cc hotspot-gc
@dholmes-ora
I cc'd hotspot-gc as they should probably be the ones to look at this for you.
Hello, I would appreciate it if anyone could review my fix. |
I plan to take a look this week. Sorry for the delay; I was occupied by some other reviewing work last week.
I noticed that GitHub Actions is not enabled for this PR. Please follow the instructions in https://wiki.openjdk.java.net/display/SKARA/Testing to enable GitHub Actions. Also, I got build errors locally.
I am very sorry. I enabled GitHub Actions.
Doesn't look like GA is properly running. It should say something like "12 successful checks", as shown in #7341.
OK, maybe this is a 32/64-bit thing. Having a working GA would have caught this.
The code will not compile on anything but 32-bit platforms. Here's a result on linux-x64:
Please fix.
The hotspot change looks good to me, but there's something unclear to me in the test.
// 1. Verify that MaxRAMPercentage overrides UseCompressedOops Ergo
// 2. Verify that UseCompressedOops forces compressed oops limit even
// when other flags are specified.
I don't understand why MaxRAMPercentage or UseCompressedOops are relevant to this fix/bug.
args.add("-version");

String cmd = ProcessTools.getCommandLine(ProcessTools.createTestJvm(args.toArray(String[]::new)));
ProcessBuilder pb = new ProcessBuilder("sh", "-c", "ulimit -v 10485760;" + cmd);
The ulimit -v parameter should be passed in as well, like MaxRAM. Then, from the calling context, one can see whether coop should be enabled or not.
Thank you for your review.
I don't understand why MaxRAMPercentage or UseCompressedOops are relevant to this fix/bug.
I forgot to remove this comment when I copied from TestMaxRAMFlags.java.
I fixed this comment.
The ulimit -v parameter should be passed in as well, like MaxRAM. Then, from the calling context, one can see whether coop should be enabled or not.
This bug appears when ulimit -v is specified and MaxRAM is set to a bigger value, so I think the ulimit -v parameter does not need to be variable.
It's not about whether it's constant or variable.
In this context, it's unclear why coop is enabled.
// Args: MaxRAM , MaxRAMPercentage, forcecoop, expect coop
checkFlag(32 * oneG, 100, false, true);
checkFlag should not sneak in logic altering the coop value.
Looks okay.
@tkiriyama This change now passes all automated pre-integration checks. ℹ️ This project also has non-automated pre-integration requirements. Please see the file CONTRIBUTING.md for details. After integration, the commit message for the final commit will be:
You can use pull request commands such as /summary, /contributor and /issue to adjust it as needed. At the time when this comment was updated there had been 281 new commits pushed to the target branch.
As there are no conflicts, your changes will automatically be rebased on top of these commits when integrating. If you prefer to avoid this automatic rebasing, please check the documentation for the /integrate command for further details. As you do not have Committer status in this project an existing Committer must agree to sponsor your change. Possible candidates are the reviewers of this PR (@albertnetymk, @tschatzl) but any other Committer may sponsor as well. ➡️ To flag this PR as ready for integration with the above commit message, type /integrate in a new comment.
In summary, the signature I have in mind is something like checkFlag(long ulimit_v, long max_ram, boolean expected_coop).
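For illustration, a call site with such a signature might read as follows (a hedged sketch; the concrete values are examples, not taken from the PR):

long oneG = 1L * 1024L * 1024L * 1024L;
// ulimit -v value (bytes), MaxRAM (bytes), expected UseCompressedOops
checkFlag(10 * oneG, 32 * oneG, true);

With the address-space limit visible right next to MaxRAM, the expected coop value can be read off from the call itself.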
// 2. Verify that UseCompressedOops forces compressed oops limit even
// when ulimit -v are specified.
I understand the first bullet but not the second. Looking at the reproducers in the JBS ticket, UseCompressedOops should not be provided; it's read only.
This looks like an additional (unrelated) test to make sure it is overridden if necessary imho. It seems a bit forced though because there is no test that actually verifies that UseCompressedOops is false later.
I'm good with removing the test case.
This looks like an additional (unrelated) test to make sure it is overridden if necessary imho.
That is correct.
Do you mean that the following code should be removed?
checkFlag(128 * oneG, 100, true, true);
Yes. Removing this test case would limit the testing to the actual problem stated in the CR. Verifying that UseCompressedOops is enabled correctly is tested somewhere else already, and there is no reason that using ulimit -v would change that behavior.
I understand.
I removed this test pattern.
long oneG = 1L * 1024L * 1024L * 1024L;

// Args: MaxRAM , MaxRAMPercentage, forcecoop, expect coop
I am not sure why MaxRAMPercentage is needed here.
I think this is because the initial value for reasonable_max is exactly MaxRAM; you are right that for this test it is not necessary to pass it explicitly, just always set it to 100 percent in the command line.
Consider that values like MaxRAM might change in the future (unlikely as it is). Setting it makes following the execution a bit easier imho.
Aha, I see. Having MaxRAMPercentage set explicitly instead of relying on its default value indeed makes the test more resilient. Please add a comment for MaxRAMPercentage.
OK.
I added a comment for MaxRAMPercentage.
long oneG = 1L * 1024L * 1024L * 1024L;

// Args: MaxRAM , MaxRAMPercentage, expect coop
// Having MaxRAMPercentage set explicitly instead of relying on its default value indeed makes the test more resilient.
Slight rewording: // Setting MaxRAMPercentage explicitly to make the test more resilient.
Thank you for the revision. Embedding ulimit -v inside checkFlag still hinders readability, IMO.
What I have in mind is:
// ulimit, max_ram, max_ram_percent, expected_coop
checkFlag(5 * oneG, 32 * oneG, 100, true);
Then, it's clear in this context why coop is enabled.
Thank you, I fixed this comment.
Sounds good. I fixed this test as you advised.
Need something like this to resolve the failure:
// Convert bytes to kbytes for ulimit -v
var ulimit_prefix = "ulimit -v " + (ulimit / 1024);
...
ProcessBuilder pb = new ProcessBuilder("sh", "-c", ulimit_prefix + ";" + cmd);
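Putting the reviewer's suggestions together, a minimal sketch of what such a helper could look like, assuming the jtreg test library classes (ProcessTools, OutputAnalyzer) already used above; the class name, the flag parsing via -XX:+PrintFlagsFinal, and the sample values are illustrative assumptions, not the merged test:

import java.util.ArrayList;

import jdk.test.lib.process.OutputAnalyzer;
import jdk.test.lib.process.ProcessTools;

public class CompressedOopsUlimitSketch {

    // ulimit and maxRam are given in bytes; MaxRAMPercentage is passed explicitly
    // so the test does not depend on its default value.
    static void checkFlag(long ulimit, long maxRam, int maxRamPercent, boolean expectCoop) throws Exception {
        ArrayList<String> args = new ArrayList<>();
        args.add("-XX:MaxRAM=" + maxRam);
        args.add("-XX:MaxRAMPercentage=" + maxRamPercent);
        args.add("-XX:+PrintFlagsFinal");
        args.add("-version");

        String cmd = ProcessTools.getCommandLine(ProcessTools.createTestJvm(args.toArray(String[]::new)));

        // Convert bytes to kbytes for ulimit -v, then run the JVM under that address-space limit.
        String ulimitPrefix = "ulimit -v " + (ulimit / 1024);
        ProcessBuilder pb = new ProcessBuilder("sh", "-c", ulimitPrefix + ";" + cmd);

        OutputAnalyzer output = new OutputAnalyzer(pb.start());
        output.shouldHaveExitValue(0);

        // Read the ergonomically chosen UseCompressedOops value from the PrintFlagsFinal output.
        String line = output.firstMatch(".*UseCompressedOops.*", 0);
        boolean actualCoop = line != null && line.contains("true");
        if (actualCoop != expectCoop) {
            throw new RuntimeException("Expected UseCompressedOops=" + expectCoop + ", got: " + line);
        }
    }

    public static void main(String[] args) throws Exception {
        long oneG = 1L * 1024L * 1024L * 1024L;
        // ulimit, max_ram, max_ram_percent, expected_coop
        checkFlag(5 * oneG, 32 * oneG, 100, true);
    }
}

The call in main mirrors the reviewer's example: the 5 GB address-space limit clamps the heap well below the compressed-oops threshold, so coop is expected to stay enabled even though MaxRAM alone would push the heap past it.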
I'm sorry. This is my mistake. I added your code.
I appreciate all the reviews. I hope this change can be integrated.
/integrate
@tkiriyama Your change is now ready to be sponsored by a Committer.
/sponsor
Going to push as commit 8d96ab0.
Your commit was automatically rebased without conflicts.
@albertnetymk @tkiriyama Pushed as commit 8d96ab0. 💡 You may see a message that your pull request was closed with unmerged commits. This can be safely ignored.
I changed the code to set the UseCompressedOops flag after limit_heap_by_allocatable_memory().
So when ulimit -v is in effect and -XX:MaxRAM is set, UseCompressedOops no longer becomes false.
All hotspot tier1 tests pass.
Would you please review this fix?
Reviewing
Using git
Checkout this PR locally:
$ git fetch https://git.openjdk.java.net/jdk pull/7938/head:pull/7938
$ git checkout pull/7938
Update a local copy of the PR:
$ git checkout pull/7938
$ git pull https://git.openjdk.java.net/jdk pull/7938/head
Using Skara CLI tools
Checkout this PR locally:
$ git pr checkout 7938
View PR using the GUI difftool:
$ git pr show -t 7938
Using diff file
Download this PR as a diff file:
https://git.openjdk.java.net/jdk/pull/7938.diff