
8282875: AArch64: [vectorapi] Optimize Vector.reduceLane for SVE 64/128 vector size #7999

Closed
wants to merge 4 commits into master from reduction

Conversation

e1iu
Member

@e1iu e1iu commented Mar 28, 2022

This patch speeds up add/mul/min/max reductions for SVE for the 64-bit and
128-bit vector sizes.

According to the Neoverse N2/V1 software optimization guides [1][2], for
128-bit vector reduction operations we prefer NEON instructions over SVE
instructions. This patch adds matching rules that distinguish the 64/128-bit
vector sizes from the others, so that these two special cases generate the
same code as NEON. For example, for ByteVector.SPECIES_128,
"ByteVector.reduceLanes(VectorOperators.ADD)" generates the code below:

```
        Before:
        uaddv   d17, p0, z16.b
        smov    x15, v17.b[0]
        add     w15, w14, w15, sxtb

        After:
        addv    b17, v16.16b
        smov    x12, v17.b[0]
        add     w12, w12, w16, sxtb
```

SVE has no multiply reduction instruction, so this patch generates code for
MulReductionVL using scalar instructions for the 128-bit vector size.

With this patch, all of these operations show performance gains in the
corresponding vector microbenchmarks on my SVE test system.
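
For reference, a minimal Vector API snippet that exercises the reductions discussed above (a sketch only; the class and variable names are illustrative, and it needs --add-modules jdk.incubator.vector to compile and run):

```java
import jdk.incubator.vector.ByteVector;
import jdk.incubator.vector.VectorOperators;

public class ReduceLanesExample {
    public static void main(String[] args) {
        // One 128-bit vector's worth of bytes (16 lanes).
        byte[] a = new byte[ByteVector.SPECIES_128.length()];
        for (int i = 0; i < a.length; i++) {
            a[i] = (byte) (i + 1);
        }
        ByteVector v = ByteVector.fromArray(ByteVector.SPECIES_128, a, 0);

        // ADD reduction: the operation this patch lowers to the NEON addv
        // sequence shown above once it is JIT-compiled.
        byte sum = v.reduceLanes(VectorOperators.ADD);

        // MUL reduction: SVE has no multiply-reduction instruction, so the
        // 128-bit case is generated with scalar instructions instead.
        byte prod = v.reduceLanes(VectorOperators.MUL);

        System.out.println(sum + " " + prod);
    }
}
```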

[1] https://developer.arm.com/documentation/pjdoc466751330-9685/latest/
[2] https://developer.arm.com/documentation/PJDOC-466751330-18256/0001

Change-Id: I4bef0b3eb6ad1bac582e4236aef19787ccbd9b1c


Progress

  • Change must not contain extraneous whitespace
  • Commit message must refer to an issue
  • Change must be properly reviewed (1 review required, with at least 1 reviewer)

Issue

  • JDK-8282875: AArch64: [vectorapi] Optimize Vector.reduceLane for SVE 64/128 vector size

Reviewing

Using git

Checkout this PR locally:
$ git fetch https://git.openjdk.java.net/jdk pull/7999/head:pull/7999
$ git checkout pull/7999

Update a local copy of the PR:
$ git checkout pull/7999
$ git pull https://git.openjdk.java.net/jdk pull/7999/head

Using Skara CLI tools

Checkout this PR locally:
$ git pr checkout 7999

View PR using the GUI difftool:
$ git pr show -t 7999

Using diff file

Download this PR as a diff file:
https://git.openjdk.java.net/jdk/pull/7999.diff

@bridgekeeper

bridgekeeper bot commented Mar 28, 2022

👋 Welcome back eliu! A progress list of the required criteria for merging this PR into master will be added to the body of your pull request. There are additional pull request commands available for use with this pull request.

@openjdk openjdk bot added the rfr Pull request is ready for review label Mar 28, 2022
@openjdk

openjdk bot commented Mar 28, 2022

@theRealELiu The following label will be automatically applied to this pull request:

  • hotspot-compiler

When this pull request is ready to be reviewed, an "RFR" email will be sent to the corresponding mailing list. If you would like to change these labels, use the /label pull request command.

@openjdk openjdk bot added the hotspot-compiler hotspot-compiler-dev@openjdk.org label Mar 28, 2022
@mlbridge

mlbridge bot commented Mar 28, 2022

Webrevs

@theRealAph
Contributor

Out of curiosity, how do we propose to distinguish when we should use SVE? I guess as long as we only have Neoverse V1/N2 for SVE it makes sense to concentrate on that, but I guess we'll want a better optimization model sooner or later.

@nsjian

nsjian commented Mar 29, 2022

> Out of curiosity, how do we propose to distinguish when we should use SVE? I guess as long as we only have Neoverse V1/N2 for SVE it makes sense to concentrate on that, but I guess we'll want a better optimization model sooner or later.

Indeed, that's what we need to resolve. Currently we don't have a good model yet, so we just try not to use SVE where it doesn't win. We are also trying to merge the current NEON/SVE ad files into a single one. Hopefully that will make it easier for codegen to choose between SVE and NEON, e.g. for small vector lengths.

@theRealAph
Contributor

Please include the benchmarks in this patch.

@e1iu
Member Author

e1iu commented Mar 30, 2022

> Please include the benchmarks in this patch.

We tested with the benchmarks in the vectorIntrinsics branch of openjdk/panama-vector, e.g. Byte128Vector.ADDLanes [1], Byte128Vector.MINLanes [2], Byte128Vector.MAXLanes [3], Byte128Vector.MULLanes [4]. Currently they are not in the JDK mainline. I'm not sure whether it's necessary to duplicate the same code for these tests in this patch.

[1] https://github.com/openjdk/panama-vector/blob/vectorIntrinsics/test/micro/org/openjdk/bench/jdk/incubator/vector/operation/Byte128Vector.java#L1022
[2] https://github.com/openjdk/panama-vector/blob/vectorIntrinsics/test/micro/org/openjdk/bench/jdk/incubator/vector/operation/Byte128Vector.java#L1086
[3] https://github.com/openjdk/panama-vector/blob/vectorIntrinsics/test/micro/org/openjdk/bench/jdk/incubator/vector/operation/Byte128Vector.java#L1118
[4] https://github.com/openjdk/panama-vector/blob/vectorIntrinsics/test/micro/org/openjdk/bench/jdk/incubator/vector/operation/Byte128Vector.java#L1054
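
For context, a JMH-style sketch of what such a reduction microbenchmark looks like (loosely modeled on the panama-vector tests linked above; the class name, array size and setup here are illustrative, not the exact upstream code):

```java
import jdk.incubator.vector.ByteVector;
import jdk.incubator.vector.VectorOperators;
import jdk.incubator.vector.VectorSpecies;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class Byte128ReductionBench {
    static final VectorSpecies<Byte> SPECIES = ByteVector.SPECIES_128;

    // Array length is a multiple of SPECIES.length() so the strided loop
    // below covers the whole array without a tail.
    byte[] a = new byte[1024];

    @Setup
    public void setup() {
        for (int i = 0; i < a.length; i++) {
            a[i] = (byte) i;
        }
    }

    // ADD reduction across lanes of each 128-bit vector.
    @Benchmark
    public byte addLanes() {
        byte res = 0;
        for (int i = 0; i < a.length; i += SPECIES.length()) {
            res += ByteVector.fromArray(SPECIES, a, i).reduceLanes(VectorOperators.ADD);
        }
        return res;
    }

    // MUL reduction across lanes of each 128-bit vector.
    @Benchmark
    public byte mulLanes() {
        byte res = 1;
        for (int i = 0; i < a.length; i += SPECIES.length()) {
            res *= ByteVector.fromArray(SPECIES, a, i).reduceLanes(VectorOperators.MUL);
        }
        return res;
    }
}
```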

@theRealAph
Contributor

theRealAph commented Mar 30, 2022 via email

Change-Id: Ibc6b9c1f46c42cd07f7bb73b81ed38829e9d0975
@openjdk

openjdk bot commented Apr 19, 2022

@theRealELiu this pull request can not be integrated into master due to one or more merge conflicts. To resolve these merge conflicts and update this pull request you can run the following commands in the local repository for your personal fork:

git checkout reduction
git fetch https://git.openjdk.java.net/jdk master
git merge FETCH_HEAD
# resolve conflicts and follow the instructions given by git merge
git commit -m "Merge master"
git push

@openjdk openjdk bot added the merge-conflict Pull request has merge conflict with target branch label Apr 19, 2022
@e1iu
Member Author

e1iu commented Apr 19, 2022

@JoshuaZhuwj Could you help to take a look at this?

@JoshuaZhuwj
Member

JoshuaZhuwj commented Apr 20, 2022

@theRealELiu your multiply reduction instruction support is very helpful.
See the following jmh performance gain in my SVE system.

Byte128Vector.MULLanes +862.54%
Byte128Vector.MULMaskedLanes +677.86%
Double128Vector.MULLanes +1611.86%
Double128Vector.MULMaskedLanes +1578.32%
Float128Vector.MULLanes +705.45%
Float128Vector.MULMaskedLanes +506.35%
Int128Vector.MULLanes +901.71%
Int128Vector.MULMaskedLanes +903.59%
Long128Vector.MULLanes +1353.17%
Long128Vector.MULMaskedLanes +1416.53%
Short128Vector.MULLanes +901.26%
Short128Vector.MULMaskedLanes +854.01%


For ADDLanes, I'm curious about the much larger performance gain for Int128Vector compared to the other types.
Do you think it aligns with your expectation?

Byte128Vector.ADDLanes +2.41%
Double128Vector.ADDLanes -0.25%
Float128Vector.ADDLanes -0.02%
Int128Vector.ADDLanes +40.61%
Long128Vector.ADDLanes +10.62%
Short128Vector.ADDLanes +5.27%

Byte128Vector.MAXLanes +2.22%
Double128Vector.MAXLanes +0.07%
Float128Vector.MAXLanes +0.02%
Int128Vector.MAXLanes +0.63%
Long128Vector.MAXLanes +0.01%
Short128Vector.MAXLanes +2.58%

Byte128Vector.MINLanes +1.88%
Double128Vector.MINLanes -0.11%
Float128Vector.MINLanes +0.05%
Int128Vector.MINLanes +0.29%
Long128Vector.MINLanes +0.08%
Short128Vector.MINLanes +2.44%

ins_pipe(pipe_class_default);
%}


Contributor

This is all far too repetitive and (therefore) hard to maintain. Please use the macro processor in a sensible way.

Please isolate the common factors.
n->in(X)->bottom_type()->is_vect()->length_in_bytes() should have a name, for example.

Member Author

I have tried. The tricky thing is that I didn't find a sensible way to fold them into a macro while keeping the m4 readable and the generated ad file well formatted. One reason is that the rules have different register usage, along with different predicates. In the example below, if it were acceptable to waste one register for reduce_mul_sve_4S, things would be much easier and all the rules could be merged together. But to get the best performance, for now I have traded maintainability for more repetitive code.

```
instruct reduce_mul_sve_4S(iRegINoSp dst, iRegIorL2I isrc, vReg vsrc, vReg vtmp) %{
  predicate(UseSVE > 0 &&
            n->in(2)->bottom_type()->is_vect()->length_in_bytes() == 8 &&
            n->in(2)->bottom_type()->is_vect()->element_basic_type() == T_SHORT);
  match(Set dst (MulReductionVI isrc vsrc));
  ins_cost(8 * INSN_COST);
  effect(TEMP_DEF dst, TEMP vtmp);
  format %{ "neon_mul_reduction_integral $dst, $isrc, $vsrc\t# mul reduction4S (sve)" %}
  ins_encode %{
    __ neon_mul_reduction_integral(as_Register($dst$$reg), T_SHORT, as_Register($isrc$$reg),
                                   as_FloatRegister($vsrc$$reg), /* vector_length_in_bytes */ 8,
                                   as_FloatRegister($vtmp$$reg), fnoreg);
  %}
  ins_pipe(pipe_slow);
%}

instruct reduce_mul_sve_8S(iRegINoSp dst, iRegIorL2I isrc, vReg vsrc, vReg vtmp1, vReg vtmp2) %{
  predicate(UseSVE > 0 &&
            n->in(2)->bottom_type()->is_vect()->length_in_bytes() == 16 &&
            n->in(2)->bottom_type()->is_vect()->element_basic_type() == T_SHORT);
  match(Set dst (MulReductionVI isrc vsrc));
  ins_cost(10 * INSN_COST);
  effect(TEMP_DEF dst, TEMP vtmp1, TEMP vtmp2);
  format %{ "neon_mul_reduction_integral $dst, $isrc, $vsrc\t# mul reduction8S (sve)" %}
  ins_encode %{
    __ neon_mul_reduction_integral(as_Register($dst$$reg), T_SHORT, as_Register($isrc$$reg),
                                   as_FloatRegister($vsrc$$reg), /* vector_length_in_bytes */ 16,
                                   as_FloatRegister($vtmp1$$reg), as_FloatRegister($vtmp2$$reg));
  %}
  ins_pipe(pipe_slow);
%}
```

Indeed, we are looking for a better way to maintain the NEON and SVE rules. @nsjian is working on the details.

Contributor

OK. There are 8 slightly different versions of reduce_X_sve_nT here. I would have thought that an ifelse around the ", vReg vtmp2" argument etc. would be exactly what you'd need, but I'm not going to try to rewrite your work.
I'm no great fan of m4, but I used it because we needed some way to write the boilerplate code such that it could be reviewed and extended; that's still true today. It's not perfect, but it's better than cut-and-paste programming.

@e1iu
Member Author

e1iu commented Apr 21, 2022

> @theRealELiu your multiply reduction instruction support is very helpful. See the following jmh performance gain in my SVE system.
>
> Byte128Vector.MULLanes +862.54% Byte128Vector.MULMaskedLanes +677.86% Double128Vector.MULLanes +1611.86% Double128Vector.MULMaskedLanes +1578.32% Float128Vector.MULLanes +705.45% Float128Vector.MULMaskedLanes +506.35% Int128Vector.MULLanes +901.71% Int128Vector.MULMaskedLanes +903.59% Long128Vector.MULLanes +1353.17% Long128Vector.MULMaskedLanes +1416.53% Short128Vector.MULLanes +901.26% Short128Vector.MULMaskedLanes +854.01%
>
> For ADDLanes, I'm curious about the much larger performance gain for Int128Vector compared to the other types. Do you think it aligns with your expectation?
>
> Byte128Vector.ADDLanes +2.41% Double128Vector.ADDLanes -0.25% Float128Vector.ADDLanes -0.02% Int128Vector.ADDLanes +40.61% Long128Vector.ADDLanes +10.62% Short128Vector.ADDLanes +5.27%
>
> Byte128Vector.MAXLanes +2.22% Double128Vector.MAXLanes +0.07% Float128Vector.MAXLanes +0.02% Int128Vector.MAXLanes +0.63% Long128Vector.MAXLanes +0.01% Short128Vector.MAXLanes +2.58%
>
> Byte128Vector.MINLanes +1.88% Double128Vector.MINLanes -0.11% Float128Vector.MINLanes +0.05% Int128Vector.MINLanes +0.29% Long128Vector.MINLanes +0.08% Short128Vector.MINLanes +2.44%

I don't know what hardware you tested on, but I would expect all of them to improve, as the software optimization guides describe. Perhaps your hardware has some additional optimizations for SVE on those types. I have checked the public guides for V1 [1], N2 [2] and A64FX [3].

[1] https://developer.arm.com/documentation/pjdoc466751330-9685/latest/
[2] https://developer.arm.com/documentation/PJDOC-466751330-18256/0001
[3] https://github.com/fujitsu/A64FX/blob/master/doc/A64FX_Microarchitecture_Manual_en_1.6.pdf

Eric Liu added 2 commits April 22, 2022 06:24
Change-Id: I275eb5834eacce029bc286b1b48128f07dd4070e
Change-Id: I7d76e606485727ca1f3de1d3af733f7e28fb9867
@JoshuaZhuwj
Member

> > @theRealELiu your multiply reduction instruction support is very helpful. See the following jmh performance gain in my SVE system.
> > Byte128Vector.MULLanes +862.54% Byte128Vector.MULMaskedLanes +677.86% Double128Vector.MULLanes +1611.86% Double128Vector.MULMaskedLanes +1578.32% Float128Vector.MULLanes +705.45% Float128Vector.MULMaskedLanes +506.35% Int128Vector.MULLanes +901.71% Int128Vector.MULMaskedLanes +903.59% Long128Vector.MULLanes +1353.17% Long128Vector.MULMaskedLanes +1416.53% Short128Vector.MULLanes +901.26% Short128Vector.MULMaskedLanes +854.01%
> > For ADDLanes, I'm curious about the much larger performance gain for Int128Vector compared to the other types. Do you think it aligns with your expectation?
> > Byte128Vector.ADDLanes +2.41% Double128Vector.ADDLanes -0.25% Float128Vector.ADDLanes -0.02% Int128Vector.ADDLanes +40.61% Long128Vector.ADDLanes +10.62% Short128Vector.ADDLanes +5.27%
> > Byte128Vector.MAXLanes +2.22% Double128Vector.MAXLanes +0.07% Float128Vector.MAXLanes +0.02% Int128Vector.MAXLanes +0.63% Long128Vector.MAXLanes +0.01% Short128Vector.MAXLanes +2.58%
> > Byte128Vector.MINLanes +1.88% Double128Vector.MINLanes -0.11% Float128Vector.MINLanes +0.05% Int128Vector.MINLanes +0.29% Long128Vector.MINLanes +0.08% Short128Vector.MINLanes +2.44%
>
> I don't know what hardware you tested on, but I would expect all of them to improve, as the software optimization guides describe. Perhaps your hardware has some additional optimizations for SVE on those types. I have checked the public guides for V1 [1], N2 [2] and A64FX [3].
>
> [1] https://developer.arm.com/documentation/pjdoc466751330-9685/latest/ [2] https://developer.arm.com/documentation/PJDOC-466751330-18256/0001 [3] https://github.com/fujitsu/A64FX/blob/master/doc/A64FX_Microarchitecture_Manual_en_1.6.pdf

I have only one test machine, so I cannot provide more performance data on different microarchitectures.
Although the performance gains differ across types, at least no regression is seen in the non-masked reductions after you replaced the SVE instructions with NEON ones.
Your change makes sense according to the Software Optimization Guides you refer to.

@openjdk openjdk bot removed the merge-conflict Pull request has merge conflict with target branch label May 13, 2022
@bridgekeeper

bridgekeeper bot commented Jun 10, 2022

@theRealELiu This pull request has been inactive for more than 4 weeks and will be automatically closed if another 4 weeks passes without any activity. To avoid this, simply add a new comment to the pull request. Feel free to ask for assistance if you need help with progressing this pull request towards integration!

@bridgekeeper

bridgekeeper bot commented Jul 8, 2022

@theRealELiu This pull request has been inactive for more than 8 weeks and will now be automatically closed. If you would like to continue working on this pull request in the future, feel free to reopen it! This can be done using the /open pull request command.

@bridgekeeper bridgekeeper bot closed this Jul 8, 2022
@e1iu e1iu deleted the reduction branch July 10, 2023 09:54
Labels
hotspot-compiler hotspot-compiler-dev@openjdk.org rfr Pull request is ready for review
4 participants