
8276673: Optimize abs operations in C2 compiler #6755

Closed
wants to merge 7 commits into from

Conversation


@fg1417 fg1417 commented Dec 8, 2021

The patch aims to optimize Math.abs() mainly in these three parts:

  1. Remove redundant instructions for abs with constant values
  2. Remove redundant instructions for abs with char type
  3. Convert some common abs operations to ideal forms
1. Remove redundant instructions for abs with constant values

If the value of the input node of Math.abs() is known at compile
time, we can replace the Abs node with the absolute value of the
constant and avoid computing it at runtime.

For example,
int[] a;
for (int i = 0; i < SIZE; i++) {
a[i] = Math.abs(-38);
}

Before the patch, the generated code for the testcase above is:
...
mov w10, #0xffffffda
cmp w10, wzr
cneg w17, w10, lt
dup v16.8h, w17
...
After the patch, the generated code for the same testcase is:
...
movi v16.4s, #0x26
...

2. Remove redundant instructions for abs with char type

In Java semantics, the char type is unsigned and therefore always
non-negative, so we can simply remove the AbsI node in the C2 middle end.
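That char widens to a non-negative int can be checked directly at the Java level, which is what makes abs an identity on it. A minimal sketch (class name is hypothetical, for illustration only):

```java
public class CharAbsDemo {
    // char is a 16-bit unsigned type: even a "negative" assignment wraps
    // to a value in [0, 65535], so Math.abs() on the widened int is a no-op.
    static boolean absIsIdentity(char c) {
        return Math.abs((int) c) == (int) c;
    }

    public static void main(String[] args) {
        char c = (char) -1;          // wraps to 0xFFFF = 65535
        System.out.println((int) c); // prints 65535
        System.out.println(absIsIdentity(c));
        System.out.println(absIsIdentity(Character.MAX_VALUE));
    }
}
```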

As for the vectorization part: in the current SLP, vectorization of
Math.abs() with char type was intentionally disabled by JDK-8261022
because it previously generated incorrect results. After removing the
AbsI node in the middle end, Math.abs(char) can be vectorized naturally.

For example,

char[] a;
char[] b;
for (int i = 0; i < SIZE; i++) {
b[i] = (char) Math.abs(a[i]);
}

Before the patch, the generated assembly code for the testcase
above is:

B15:
add x13, x21, w20, sxtw #1
ldrh w11, [x13, #16]
cmp w11, wzr
cneg w10, w11, lt
strh w10, [x13, #16]
ldrh w10, [x13, #18]
cmp w10, wzr
cneg w10, w10, lt
strh w10, [x13, #18]
...
add w20, w20, #0x1
cmp w20, w17
b.lt B15

After the patch, the generated assembly code is:
B15:
sbfiz x18, x19, #1, #32
add x0, x14, x18
ldr q16, [x0, #16]
add x18, x21, x18
str q16, [x18, #16]
ldr q16, [x0, #32]
str q16, [x18, #32]
...
add w19, w19, #0x40
cmp w19, w17
b.lt B15

3. Convert some common abs operations to ideal forms

The patch overrides some virtual support functions of AbsNode
so that GVN optimizations can work on it. The optimizable
forms are:

a) abs(0 - x) => abs(x)

Before the patch:
...
ldr w13, [x13, #16]
neg w13, w13
cmp w13, wzr
cneg w14, w13, lt
...
After the patch:
...
ldr w13, [x13, #16]
cmp w13, wzr
cneg w13, w13, lt
...

b) abs(abs(x)) => abs(x)

Before the patch:
...
ldr w12, [x12, #16]
cmp w12, wzr
cneg w12, w12, lt
cmp w12, wzr
cneg w12, w12, lt
...
After the patch:
...
ldr w13, [x13, #16]
cmp w13, wzr
cneg w13, w13, lt
...
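Both identities also hold at the Java source level, including at Integer.MIN_VALUE, where 0 - x overflows back to MIN_VALUE and abs leaves it unchanged, which is what makes the rewrites safe. A minimal check (class name is hypothetical):

```java
public class AbsIdentities {
    static int absNeg(int x) { return Math.abs(0 - x); }           // rule a) folds to Math.abs(x)
    static int absAbs(int x) { return Math.abs(Math.abs(x)); }     // rule b) folds to Math.abs(x)

    public static void main(String[] args) {
        int[] samples = { -7, 0, 7, Integer.MIN_VALUE, Integer.MAX_VALUE };
        for (int x : samples) {
            // Both rewritten forms must agree with plain Math.abs(x)
            if (absNeg(x) != Math.abs(x) || absAbs(x) != Math.abs(x)) {
                throw new AssertionError("mismatch for " + x);
            }
        }
        System.out.println("identities hold");
    }
}
```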


Progress

  • Change must not contain extraneous whitespace
  • Commit message must refer to an issue
  • Change must be properly reviewed

Using git

Checkout this PR locally:
$ git fetch https://git.openjdk.java.net/jdk pull/6755/head:pull/6755
$ git checkout pull/6755

Update a local copy of the PR:
$ git checkout pull/6755
$ git pull https://git.openjdk.java.net/jdk pull/6755/head

Using Skara CLI tools

Checkout this PR locally:
$ git pr checkout 6755

View PR using the GUI difftool:
$ git pr show -t 6755

Using diff file

Download this PR as a diff file:
https://git.openjdk.java.net/jdk/pull/6755.diff


bridgekeeper bot commented Dec 8, 2021

👋 Welcome back fgao! A progress list of the required criteria for merging this PR into master will be added to the body of your pull request. There are additional pull request commands available for use with this pull request.

@openjdk openjdk bot added the rfr Pull request is ready for review label Dec 8, 2021

openjdk bot commented Dec 8, 2021

@fg1417 The following label will be automatically applied to this pull request:

  • hotspot-compiler

When this pull request is ready to be reviewed, an "RFR" email will be sent to the corresponding mailing list. If you would like to change these labels, use the /label pull request command.

@openjdk openjdk bot added the hotspot-compiler hotspot-compiler-dev@openjdk.org label Dec 8, 2021

mlbridge bot commented Dec 8, 2021

Webrevs


fg1417 commented Dec 13, 2021

The PR optimizes abs operations in the C2 middle end. Can I have your review please?

@DamonFool (Member):

The PR optimizes abs operations in the C2 middle end. Can I have your review please?

So what's the performance data before and after this patch?
Does it also benefit on x86?

It would be better to provide a jmh micro benchmark.
Thanks.

Fei Gao added 2 commits December 15, 2021 10:30
Change-Id: I71987594e9288a489a04de696e69a62f4ad19357
Change-Id: I64938d543126c2e3f9fad8ffc4a50e25e4473d8f

fg1417 commented Dec 15, 2021

The PR optimizes abs operations in the C2 middle end. Can I have your review please?

So what's the performance data before and after this patch? Does it also benefit on x86?

It would be better to provide a jmh micro benchmark. Thanks.

Thanks, @DamonFool. Yes, it's supposed to benefit all architectures.
For example, here is the performance data on x86.

Before the patch:
  Benchmark                 (seed)   Mode  Cnt       Score       Error   Units
  MathBench.absConstantInt       0  thrpt    5  291960.380 ± 10724.572  ops/ms

After the patch:
  Benchmark                 (seed)   Mode  Cnt       Score       Error   Units
  MathBench.absConstantInt       0  thrpt    5  336271.533 ±  3778.210  ops/ms

The jmh micro benchmark testcase has been added in the latest commit.

@DamonFool (Member):

Hi @fg1417 ,

Thanks for your update.

Now I see that you are trying to optimize the following three abs() patterns:

  1. Math.abs(-38)
  2. (char) Math.abs((char) c)
  3. Math.abs(0 - x)

But did you see these code patterns in real programs?
I'm a bit worried that we just improve the complexity of C2 with (almost) no performance gain in the real world.
Thanks.

if (ti->is_con()) {
  // Special case for min_jint: Math.abs(min_jint) = min_jint.
  // Do not use C++ abs() for min_jint to avoid undefined behavior.
  return (ti->is_con(min_jint)) ? TypeInt::MIN : TypeInt::make(abs(ti->get_con()));
Contributor review comment:
Suggested change:
- return (ti->is_con(min_jint)) ? TypeInt::MIN : TypeInt::make(abs(ti->get_con()));
+ return TypeInt::make(uabs(ti->get_con());

We have uabs() for julong and unsigned int.
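The min_jint edge case guarded here is observable from Java: Math.abs(Integer.MIN_VALUE) is specified to return Integer.MIN_VALUE itself, since +2^31 is not representable as an int. A quick check of that specified behavior:

```java
public class MinJintAbs {
    public static void main(String[] args) {
        // Two's-complement int cannot represent +2147483648, so
        // Math.abs is specified to return MIN_VALUE unchanged.
        System.out.println(Math.abs(Integer.MIN_VALUE) == Integer.MIN_VALUE); // true
        System.out.println(Math.abs(Integer.MIN_VALUE + 1)); // 2147483647
    }
}
```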

Author replied:
Thanks for your review. Fixed.


fg1417 commented Dec 16, 2021

> Now I see that you are trying to optimize the following three abs() patterns: Math.abs(-38), (char) Math.abs((char) c), and Math.abs(0 - x). But did you see these code patterns in real programs? I'm a bit worried that we just improve the complexity of C2 with (almost) no performance gain in the real world.

Thanks for your review, @DamonFool. I really understand your concern.

In terms of complexity, the change only involves AbsNode and doesn't modify any other part of the code. I don't think it will make C2 more complex.

As for performance gain in the real world: the ability of GVN to optimize a node often depends on the optimized result of its input nodes. For example, if the input node of an AbsNode is recognized as a constant after the last round of GVN optimization, we can then fold abs(constant) to a simple constant value. Like the existing C2 transformation that removes double negation, we may not see -(-x) or (x+y)-y written directly in any Java program, but such shapes can appear after C2 optimization. The optimization, whether for sub or for abs, is trivial, low-cost but useful. Why not apply it :)
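The argument can be sketched concretely (a hypothetical example; the folding of (x+y)-y to a constant and the subsequent abs(constant) fold are the C2 transformations described above, which the plain Java below only mirrors arithmetically):

```java
public class AbsAfterGvn {
    // The source never writes abs(constant), but after C2 folds
    // (x + 38) - x to the constant 38 (the (x+y)-y shape mentioned
    // above), the Abs node's input becomes a constant and folds too.
    // In Java's wraparound int arithmetic this equals 38 for every x.
    static int f(int x) {
        return Math.abs((x + 38) - x);
    }

    public static void main(String[] args) {
        System.out.println(f(123));               // 38
        System.out.println(f(Integer.MAX_VALUE)); // 38, even when x + 38 overflows
    }
}
```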

Math.abs(-38), (char) Math.abs((char) c) and Math.abs(0 - x) are just conformance testcases. As you said, maybe nobody writes these cases in the real world. These testcases just simulate all possible scenarios that an AbsNode may meet, to guarantee the correctness of the optimization.

What do you think :)

Thanks.

@DamonFool
Copy link
Member

> These testcases are just simulating all possible scenarios that AbsNode may meet, to guarantee the correctness of the optimization.

Then, shall we also opt cases like Math.abs(-1 * x), Math.abs(x / (-1)), and so on?
Thanks.


fg1417 commented Dec 17, 2021

> Then, shall we also opt cases like Math.abs(-1 * x), Math.abs(x / (-1)), and so on? Thanks.

Hi, @DamonFool .

Actually, the cases you listed above, Math.abs(-1 * x) and Math.abs(x / (-1)), are covered by the optimized pattern Math.abs(0 - x).

In C2, -1 * x becomes 0 - x after GVN optimization in MulNode: C2 first rewrites -1 * x as 0 - (1 * x), and then MulNode::Identity() folds 1 * x to x. After that, 0 - x matches the pattern that AbsNode can recognize. The same applies to Math.abs(x / (-1)). As I mentioned before, the AbsNode optimization doesn't work as a standalone pass; it is combined very closely with its input nodes. The instruction sequence changes as GVN iterates, and our ideal pattern may occur.
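The chain described above can only be observed indirectly from Java, since the canonicalization steps (-1 * x => 0 - (1 * x) => 0 - x => the AbsNode pattern) are internal to C2. A sketch confirming that all the source-level forms agree (class name hypothetical; the claim that x / (-1) canonicalizes the same way is from the discussion above):

```java
public class AbsNegForms {
    static int viaMul(int x) { return Math.abs(-1 * x); }  // C2: -1*x => 0-x, then abs(0-x) => abs(x)
    static int viaDiv(int x) { return Math.abs(x / -1); }  // per the discussion, canonicalized the same way
    static int viaSub(int x) { return Math.abs(0 - x); }   // the pattern AbsNode recognizes directly

    public static void main(String[] args) {
        for (int x : new int[] { -9, 0, 9 }) {
            if (viaMul(x) != Math.abs(x) || viaDiv(x) != Math.abs(x) || viaSub(x) != Math.abs(x)) {
                throw new AssertionError("mismatch for " + x);
            }
        }
        System.out.println("all forms agree");
    }
}
```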

Your cases also help prove that these several patterns, like abs(0-x) and abs(positive_value), are very fundamental and common. That’s why we choose them.

Thanks.

@DamonFool (Member):

Your cases also help prove that these several patterns, like abs(0-x) and abs(positive_value), are very fundamental and common.

I don't think so.
In real programs, I will never write code like Math.abs(0 - x), Math.abs(-1 * x) and Math.abs(x / (-1)).

How about improving your micro-benchmark to show the performance gain?
To be honest, benchmarking with Math.abs(-3) seems strange to me since I don't think people will write that code.
So I would suggest writing a jmh test which may be used in real programs.
Thanks.

Change-Id: Ie6f37ab159fb7092e1443b9af8d620562a45ae47
@DamonFool (Member):

I discussed this opt with @theRealAph offline.

To clarify from my point of view:

  1. I have no objection to this PR.
  2. I'd like to see a benchmark which people would write in real programs.
  3. But if the OpenJDK experts think it's already good enough, please go ahead.

Thanks.


fg1417 commented Dec 20, 2021

> I'd like to see a benchmark which people would write in real programs.

Hi, @DamonFool

I ran the jtreg tests internally with some logging info to verify that the optimization works in real Java programs. The results show that these patterns are hit in the following tests:

• java/lang/StackWalker/LocalsAndOperands.java#id0
• java/lang/StackWalker/LocalsAndOperands.java#id1
• java/lang/invoke/LFCaching/LFSingleThreadCachingTest.java
• java/util/concurrent/tck/JSR166TestCase.java
• javax/management/timer/MissingNotificationTest.java
• jdk/incubator/vector/Double128VectorTests.java
• jdk/incubator/vector/Double256VectorTests.java
• jdk/incubator/vector/Double512VectorTests.java
• jdk/incubator/vector/Double64VectorTests.java
• jdk/incubator/vector/DoubleMaxVectorTests.java
• jdk/incubator/vector/Float128VectorTests.java
• jdk/incubator/vector/Float256VectorTests.java
• jdk/incubator/vector/Float512VectorTests.java
• jdk/incubator/vector/Float64VectorTests.java
• jdk/incubator/vector/FloatMaxVectorTests.java
• jdk/incubator/vector/Vector128ConversionTests.java
• jdk/incubator/vector/Vector256ConversionTests.java
• jdk/incubator/vector/Vector64ConversionTests.java#id0
• jdk/incubator/vector/VectorMaxConversionTests.java

It's not easy to spot these patterns in the original Java code by eye. Since the added code lines are hit, the patterns must arise after many rounds of optimization. The benefit applies to all platforms, whether x86 or AArch64.

As for the current benchmark, it's not meant to show real-world performance gain but to illustrate that the optimization benefits x86 as well, in case you were wondering. A benchmark drawn from a real Java program wouldn't be lightweight or straightforward, so I may not be able to provide a satisfying micro benchmark.

Thanks.

@DamonFool (Member):

> Since the added code lines are hit, the patterns must occur after many rounds of optimization.

Good news!

But can you show us an example with a more detailed analysis of which pattern is applied in the test?
Thanks.
Thanks.

public class TestAbs {
private static int SIZE = 500;
Member review comment:

Not used?

Author replied:

Done. Thanks.

Asserts.assertEquals(Long.MAX_VALUE, Math.abs(-Long.MAX_VALUE));

// Test abs(constant) optimization for float
Asserts.assertEquals(Float.NaN, Math.abs(Float.NaN));
Member review comment:

I would suggest something like:

assertTrue(Float.isNaN(Math.abs(Float.NaN)))

Author replied:

Done. Thanks.

Asserts.assertEquals(Double.MIN_VALUE, Math.abs(-Double.MIN_VALUE));
}

private static void testAbsTransformInt(int[] a) {
Member review comment:

If you want to verify C2's transformation, probably we should use C2's IR test framework.

Author replied:

Done. Thanks.


fg1417 commented Dec 20, 2021

But can you show us an example with a more detailed analysis of which pattern is applied in the test? Thanks.

Hi, @DamonFool

For example, jdk/incubator/vector/Float512VectorTests.java calls java.lang.FdLibm$Hypot::compute. You can check the math classes in FdLibm.java, like Hypot or the common Pow, which call Math.abs(). After inlining and many optimizations, such as constant propagation, the input value of Math.abs() is probably a constant or (0 - x). We can optimize it with this patch.
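As an illustration of how such patterns arise without being written explicitly (a hypothetical example, not the actual FdLibm code): once a small helper is inlined with a constant argument, the JIT sees abs(constant) directly.

```java
public class InlinedAbs {
    // Hypothetical helper: nobody writes Math.abs(-38) directly,
    // but after the JIT inlines helper() into caller(), the abs
    // input is the compile-time constant -38 and can fold to 38.
    static int helper(int v) { return Math.abs(v); }
    static int caller() { return helper(-38); }

    public static void main(String[] args) {
        System.out.println(caller()); // 38
    }
}
```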

I learnt this optimization technique from my colleague's patch, #2776 (comment).
A similar question was answered by Tobias in that conversation; you can refer to it.

Thanks.

@DamonFool (Member):

> After inline and many optimizations, such as constant propagation, the input value of Math.abs() is probably constant or (0-x).

Very good!
You have proved that these patterns do exist in C2's opt passes, so this patch makes sense to me.
Thanks.

@TobiHartmann left a comment:

Looks good to me otherwise (except the points about the test that @DamonFool already raised).

set_req(1, in1->in(2));
PhaseIterGVN* igvn = phase->is_IterGVN();
if (igvn) {
  igvn->_worklist.push(in1);
Member review comment:

Why is that needed? Because in1 could become dead? You should use set_req_X above.

Author replied:

Done. Thanks.

Fei Gao added 2 commits January 14, 2022 06:18
Change-Id: I8220d54a443a39e04353688143db4b61428be2ad
Change-Id: Ia2372ed06fcc7c88285461a1b013898d9327c18e

fg1417 commented Jan 14, 2022

Thanks for your reviews, @DamonFool @TobiHartmann. I have fixed all the points mentioned above.

@TobiHartmann left a comment:

Looks good to me.


openjdk bot commented Jan 14, 2022

@fg1417 This change now passes all automated pre-integration checks.

ℹ️ This project also has non-automated pre-integration requirements. Please see the file CONTRIBUTING.md for details.

After integration, the commit message for the final commit will be:

8276673: Optimize abs operations in C2 compiler

Reviewed-by: thartmann, jiefu

You can use pull request commands such as /summary, /contributor and /issue to adjust it as needed.

At the time when this comment was updated there had been 25 new commits pushed to the master branch.

As there are no conflicts, your changes will automatically be rebased on top of these commits when integrating. If you prefer to avoid this automatic rebasing, please check the documentation for the /integrate command for further details.

As you do not have Committer status in this project an existing Committer must agree to sponsor your change. Possible candidates are the reviewers of this PR (@TobiHartmann, @DamonFool) but any other Committer may sponsor as well.

➡️ To flag this PR as ready for integration with the above commit message, type /integrate in a new comment. (Afterwards, your sponsor types /sponsor in a new comment to perform the integration).

@openjdk openjdk bot added the ready Pull request is ready to be integrated label Jan 14, 2022
@DamonFool left a comment:

LGTM
Thanks for the update.


fg1417 commented Jan 17, 2022

/integrate

@openjdk openjdk bot added the sponsor Pull request is ready to be sponsored label Jan 17, 2022

openjdk bot commented Jan 17, 2022

@fg1417
Your change (at version 845d43a) is now ready to be sponsored by a Committer.

@DamonFool (Member):

/sponsor


openjdk bot commented Jan 17, 2022

Going to push as commit c619666.
Since your change was applied there have been 25 commits pushed to the master branch.

Your commit was automatically rebased without conflicts.

@openjdk openjdk bot added the integrated Pull request has been integrated label Jan 17, 2022
@openjdk openjdk bot closed this Jan 17, 2022
@openjdk openjdk bot removed ready Pull request is ready to be integrated rfr Pull request is ready for review sponsor Pull request is ready to be sponsored labels Jan 17, 2022

openjdk bot commented Jan 17, 2022

@DamonFool @fg1417 Pushed as commit c619666.

💡 You may see a message that your pull request was closed with unmerged commits. This can be safely ignored.
