
8292898: [vectorapi] Unify vector mask cast operation #10192

Closed
wants to merge 18 commits into from

Conversation

XiaohongGong

@XiaohongGong XiaohongGong commented Sep 7, 2022

The current implementation of the vector mask cast operation is
complex in that the compiler generates different patterns for
different scenarios. On architectures that do not support the
predicate feature, a vector mask is represented the same as a normal
vector, so the vector mask cast is implemented with a VectorCast
node. But this is not always necessary: when two masks have the same
element size (e.g. int vs. float), their bit layouts are identical,
so casting between them does not need to emit any instructions.
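To illustrate the same-element-size point, here is a small plain-Java sketch (not Vector API or HotSpot code; the MaskLayout class and maskBytes helper are hypothetical) of how a mask looks in an ordinary vector register on such architectures: each lane holds all-ones or all-zeros, so masks for two types with the same lane size have identical bits, while a different lane size requires widening or narrowing the lanes.

```java
import java.util.Arrays;

public class MaskLayout {
    // On architectures without predicate registers, a vector mask lives in a
    // normal vector register: each lane is all-ones (true) or all-zeros (false).
    // This helper materializes that byte layout for a given lane size.
    static byte[] maskBytes(boolean[] lanes, int laneBytes) {
        byte[] out = new byte[lanes.length * laneBytes];
        for (int i = 0; i < lanes.length; i++) {
            Arrays.fill(out, i * laneBytes, (i + 1) * laneBytes,
                        lanes[i] ? (byte) -1 : (byte) 0);
        }
        return out;
    }

    public static void main(String[] args) {
        boolean[] lanes = {true, false, true, true};
        byte[] asIntMask   = maskBytes(lanes, 4); // int lanes: 4 bytes each
        byte[] asFloatMask = maskBytes(lanes, 4); // float lanes: also 4 bytes
        byte[] asLongMask  = maskBytes(lanes, 8); // long lanes: 8 bytes each
        // Same element size => identical bits, so an int<->float mask cast is free.
        System.out.println(Arrays.equals(asIntMask, asFloatMask)); // prints true
        // Different element size => each lane must be expanded or narrowed.
        System.out.println(asIntMask.length == asLongMask.length); // prints false
    }
}
```

This is only a model of the register contents; the actual representation is chosen by the backend.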

Currently the compiler generates different patterns based on the
vector types of the input/output and on the platform. Normally the
"VectorMaskCast" op is only used for cases that don't emit any
instructions, while the "VectorCast" op is used to implement the
necessary expand/narrow operations. This avoids adding duplicate
rules in the backend. However, this approach also has drawbacks:

  1. The code is complex, especially when the compiler needs to
    check whether the hardware supports the necessary IR for the
    vector mask cast; it must check different patterns for
    different cases.
  2. On some architectures, the vector mask cast could be implemented
    with cheaper instructions than a full vector cast.

Instead of generating VectorCast or VectorMaskCast nodes for different
cases of vector mask cast operations, this patch unifies the vector
mask cast implementation with "VectorMaskCast" node for all vector types
and platforms. The missing backend rules are also added for it.

This patch also simplifies the vector mask conversion that happens in
"VectorUnbox::Ideal()". Normally "VectorUnbox (VectorBox vmask)" can
be optimized to "vmask" if the unboxing type matches the boxed
"vmask" type. Otherwise, a type conversion is needed. Currently
"VectorUnbox" is transformed into one of two patterns to implement
the conversion:

  1. If the element size is not changed, it is transformed to:
    "VectorMaskCast vmask"
  2. Otherwise, it is transformed to:
    "VectorLoadMask (VectorStoreMask vmask)"

The second pattern first converts "vmask" to a boolean vector with
"VectorStoreMask", and then uses "VectorLoadMask" to convert the
boolean vector to the destination mask vector. Since this patch makes
the "VectorMaskCast" op supported for all types on all platforms, the
"VectorLoadMask" and "VectorStoreMask" pair is no longer needed for
the conversion. The existing transformation:

  VectorUnbox (VectorBox vmask) => VectorLoadMask (VectorStoreMask vmask)

can be simplified to:

  VectorUnbox (VectorBox vmask) => VectorMaskCast vmask
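As a rough illustration of this rewrite, the following plain-Java sketch models the transformation. The Node class and the type strings are hypothetical stand-ins for C2's real IR nodes and types, not actual HotSpot code:

```java
// Much-simplified model of the VectorUnbox::Ideal() rewrite described above.
// Node names mirror the C2 IR; the classes here are illustrative only.
public class UnboxIdeal {
    static class Node {
        final String op; final String type; final Node in;
        Node(String op, String type, Node in) {
            this.op = op; this.type = type; this.in = in;
        }
    }

    // VectorUnbox (VectorBox vmask) => vmask                 (types match)
    // VectorUnbox (VectorBox vmask) => VectorMaskCast vmask  (types differ)
    static Node idealUnbox(Node unbox) {
        if (unbox.op.equals("VectorUnbox") && unbox.in.op.equals("VectorBox")) {
            Node vmask = unbox.in.in;
            if (unbox.type.equals(vmask.type)) {
                return vmask;  // box/unbox pair folds away, no conversion needed
            }
            // One cast node replaces the old VectorLoadMask/VectorStoreMask pair.
            return new Node("VectorMaskCast", unbox.type, vmask);
        }
        return unbox;
    }

    public static void main(String[] args) {
        Node vmask = new Node("MaskAll", "mask<int,4>", null);
        Node boxed = new Node("VectorBox", "mask<int,4>", vmask);
        System.out.println(
            idealUnbox(new Node("VectorUnbox", "mask<int,4>", boxed)).op);
        System.out.println(
            idealUnbox(new Node("VectorUnbox", "mask<long,4>", boxed)).op);
    }
}
```

With matching types the unbox folds to the original mask; with differing types a single VectorMaskCast is emitted, which is the unification this patch introduces.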

Progress

  • Change must be properly reviewed (1 review required, with at least 1 Reviewer)
  • Change must not contain extraneous whitespace
  • Commit message must refer to an issue

Issue

  • JDK-8292898: [vectorapi] Unify vector mask cast operation

Reviewers

Contributors

  • Quan Anh Mai <qamai@openjdk.org>

Reviewing

Using git

Checkout this PR locally:
$ git fetch https://git.openjdk.org/jdk pull/10192/head:pull/10192
$ git checkout pull/10192

Update a local copy of the PR:
$ git checkout pull/10192
$ git pull https://git.openjdk.org/jdk pull/10192/head

Using Skara CLI tools

Checkout this PR locally:
$ git pr checkout 10192

View PR using the GUI difftool:
$ git pr show -t 10192

Using diff file

Download this PR as a diff file:
https://git.openjdk.org/jdk/pull/10192.diff

@bridgekeeper

bridgekeeper bot commented Sep 7, 2022

👋 Welcome back xgong! A progress list of the required criteria for merging this PR into pr/9737 will be added to the body of your pull request. There are additional pull request commands available for use with this pull request.

@openjdk openjdk bot added the rfr Pull request is ready for review label Sep 7, 2022
@openjdk

openjdk bot commented Sep 7, 2022

@XiaohongGong The following label will be automatically applied to this pull request:

  • hotspot-compiler

When this pull request is ready to be reviewed, an "RFR" email will be sent to the corresponding mailing list. If you would like to change these labels, use the /label pull request command.

@openjdk openjdk bot added the hotspot-compiler hotspot-compiler-dev@openjdk.org label Sep 7, 2022
@mlbridge

mlbridge bot commented Sep 7, 2022

Webrevs

@openjdk-notifier

@XiaohongGong Please do not rebase or force-push to an active PR as it invalidates existing review comments. All changes will be squashed into a single commit automatically when integrating. See OpenJDK Developers’ Guide for more information.

@XiaohongGong
Author

/contributor add qamai

@openjdk

openjdk bot commented Sep 8, 2022

@XiaohongGong
Contributor Quan Anh Mai <qamai@openjdk.org> successfully added.

@XiaohongGong
Author

Hi, could anyone please help to take a look at this PR? Thanks in advance!

@XiaohongGong
Author

Hi @jatin-bhateja, @DamonFool , could you please help to take a look at this PR? Thanks a lot!

@XiaohongGong
Author

Hi @sviswa7, could you please help to take a look at the x86 codegen part? Thanks so much!

@openjdk-notifier

The dependent pull request has now been integrated, and the target branch of this pull request has been updated. This means that changes from the dependent pull request can start to show up as belonging to this pull request, which may be confusing for reviewers. To remedy this situation, simply merge the latest changes from the new target branch into this pull request by running commands similar to these in the local repository for your personal fork:

git checkout JDK-8292898
git fetch https://git.openjdk.org/jdk master
git merge FETCH_HEAD
# if there are conflicts, follow the instructions given by git merge
git commit -m "Merge master"
git push

@openjdk

openjdk bot commented Sep 16, 2022

@XiaohongGong this pull request can not be integrated into master due to one or more merge conflicts. To resolve these merge conflicts and update this pull request you can run the following commands in the local repository for your personal fork:

git checkout JDK-8292898
git fetch https://git.openjdk.org/jdk master
git merge FETCH_HEAD
# resolve conflicts and follow the instructions given by git merge
git commit -m "Merge master"
git push

@openjdk openjdk bot added the merge-conflict Pull request has merge conflict with target branch label Sep 16, 2022
@openjdk openjdk bot removed the merge-conflict Pull request has merge conflict with target branch label Sep 16, 2022
@XiaohongGong
Author

Hi @jatin-bhateja , the IR test has been added. Could you please help to review again? Thanks a lot!

@jatin-bhateja
Member

jatin-bhateja commented Oct 4, 2022

Some of the IR tests, like testByte64ToLong512, are currently failing on KNL due to the following check:
https://github.com/openjdk/jdk/blob/master/src/hotspot/share/opto/vectorIntrinsics.cpp#L2484

Since the source and destination ideal types are different (TypeVect vs. TypeVectMask), could you kindly change the feature check for the relevant IR tests to avx512vl until we remove that limitation?

@XiaohongGong
Author

Thanks for pointing out this issue. Sure, I will limit the feature check to "avx512vl" for all the 512-bit related casts. BTW, could you please show me how to run the test with the KNL feature, so that I can run an internal test before pushing the changes? Thanks a lot!

@XiaohongGong
Author

Hi @jatin-bhateja , the test is updated. I tested it with -XX:+UseKNLSetting by adding the flag to TestFramework.runWithFlags() in the main function, and the tests pass. Could you please help to check whether it is OK for you? Thanks a lot!

@jatin-bhateja
Member

Thanks! We can also pass additional flags in JTREG_WHITELIST_FLAGS in TestFramework.java.

Hi @XiaohongGong , thanks for addressing my comments; the test now passes on the KNL platform.
The newly introduced @Warmup annotation in all the tests looks redundant, since in NORMAL run mode the framework does the necessary warmup followed by compilation by C2 (the default compiler).


@jatin-bhateja jatin-bhateja left a comment

The rest of the common IR and x86 backend changes look good to me; you may need a second approval.
Please remove the additional warmup introduced in the tests.

@openjdk

openjdk bot commented Oct 10, 2022

@XiaohongGong This change now passes all automated pre-integration checks.

ℹ️ This project also has non-automated pre-integration requirements. Please see the file CONTRIBUTING.md for details.

After integration, the commit message for the final commit will be:

8292898: [vectorapi] Unify vector mask cast operation

Co-authored-by: Quan Anh Mai <qamai@openjdk.org>
Reviewed-by: jbhateja, eliu

You can use pull request commands such as /summary, /contributor and /issue to adjust it as needed.

At the time when this comment was updated there had been 3 new commits pushed to the master branch:

  • 97f1321: 8294356: IGV: scheduled graphs contain duplicated elements
  • 5e05e42: 8294901: remove pre-VS2017 checks in Windows related coding
  • e775acf: 8293986: Incorrect double-checked locking in com.sun.beans.introspect.ClassInfo

Please see this link for an up-to-date comparison between the source branch of this pull request and the master branch.
As there are no conflicts, your changes will automatically be rebased on top of these commits when integrating. If you prefer to avoid this automatic rebasing, please check the documentation for the /integrate command for further details.

➡️ To integrate this PR with the above commit message to the master branch, type /integrate in a new comment.

@openjdk openjdk bot added the ready Pull request is ready to be integrated label Oct 10, 2022
@XiaohongGong
Author

Hi @jatin-bhateja , thanks for looking at the changes again! Yes, you are right that the framework has a default warmup (2000). But I'd like to keep the newly added 10000 here, because I ran into IR check failures when I wrote another IR test and set the warmup to 5000. To be honest, I don't know why it fails: the method is compiled by C2, but the compiler appears to lose some information, so the expected IR is not generated. Adding the larger warmup is therefore safer. WDYT? Thanks!


@e1iu e1iu left a comment

LGTM.

@merykitty
Member

@XiaohongGong You can set default warmup iterations using TestFramework::setDefaultWarmup instead of annotating all methods.

@XiaohongGong
Author

@XiaohongGong You can set default warmup iterations using TestFramework::setDefaultWarmup instead of annotating all methods.

Good idea. I will change it to use this approach and try to set a smaller warmup. Thanks!

@jatin-bhateja
Member

jatin-bhateja commented Oct 10, 2022

The framework uses whitebox APIs to enqueue test methods into compile queues, from which they are picked up by the respective compilers, so test method compilation here is agnostic to the warmup invocation count. The warmup will, however, ensure that some of the closed-world assumptions needed for intrinsification are met.

The changes still look good to me. Thanks!

@XiaohongGong
Author

I see. Thanks a lot for the clarification and reviewing!

@merykitty
Member

Actually, I also encountered intrinsification failures while working on JDK-8259610 when setting the warmup iterations too low (INVOCATIONS is set to 10000 in those tests). The cause is unknown to me; probably some information fails to be propagated through inlining. This can be seen frequently using -XX:+PrintIntrinsics, although the compiler eventually manages to get the required constant information. As a result, I think a warmup iteration count of 10000 is alright here. Thanks.

@XiaohongGong
Author

The GHA tests all pass here: https://github.com/XiaohongGong/jdk/actions/runs/3224880587

@XiaohongGong
Author

/integrate

@openjdk

openjdk bot commented Oct 12, 2022

Going to push as commit ab8c136.
Since your change was applied there have been 22 commits pushed to the master branch:

  • 2ceb80c: 8288043: Optimize FP to word/sub-word integral type conversion on X86 AVX2 platforms
  • 703a6ef: 8283699: Improve the peephole mechanism of hotspot
  • 94a9b04: 8295013: OopStorage should derive from CHeapObjBase
  • 3a980b9: 8295168: Remove superfluous period in @throws tag description
  • 9bb932c: 8295154: Documentation for RemoteExecutionControl.invoke(Method) inherits non-existent documentation
  • 945950d: 8295069: [PPC64] Performance regression after JDK-8290025
  • d362e16: 8294689: The SA transported_core.html file needs quite a bit of work
  • 07946aa: 8289552: Make intrinsic conversions between bit representations of half precision values and floats
  • 2586b1a: 8295155: Incorrect javadoc of java.base module
  • e1a77cf: 8295163: Remove old hsdis Makefile
  • ... and 12 more: https://git.openjdk.org/jdk/compare/9d116ec147a3182a9c831ffdce02c98da8c5031d...master

Your commit was automatically rebased without conflicts.

@openjdk openjdk bot added the integrated Pull request has been integrated label Oct 12, 2022
@openjdk openjdk bot closed this Oct 12, 2022
@openjdk openjdk bot removed ready Pull request is ready to be integrated rfr Pull request is ready for review labels Oct 12, 2022
@openjdk

openjdk bot commented Oct 12, 2022

@XiaohongGong Pushed as commit ab8c136.

💡 You may see a message that your pull request was closed with unmerged commits. This can be safely ignored.

@XiaohongGong XiaohongGong deleted the JDK-8292898 branch October 12, 2022 01:40