
8294588: Auto vectorize half precision floating point conversion APIs#11471

Closed
smita-kamath wants to merge 9 commits into openjdk:master from smita-kamath:JDK-8294588

Conversation

@smita-kamath

@smita-kamath smita-kamath commented Dec 2, 2022

Hi All,

I have added changes for auto-vectorizing the Float.float16ToFloat and Float.floatToFloat16 APIs.
Following are the performance numbers from the JMH micro-benchmark Fp16ConversionBenchmark:
Before code changes:
Benchmark                                    | (size) | Mode  | Cnt | Score     | Error       | Units
Fp16ConversionBenchmark.float16ToFloat       | 2048   | thrpt | 3   | 1044.653  | ± 0.041     | ops/ms
Fp16ConversionBenchmark.float16ToFloatMemory | 2048   | thrpt | 3   | 2341529.9 | ± 11765.453 | ops/ms
Fp16ConversionBenchmark.floatToFloat16       | 2048   | thrpt | 3   | 2156.662  | ± 0.653     | ops/ms
Fp16ConversionBenchmark.floatToFloat16Memory | 2048   | thrpt | 3   | 2007988.1 | ± 361.696   | ops/ms

After:
Benchmark                                    | (size) | Mode  | Cnt | Score       | Error      | Units
Fp16ConversionBenchmark.float16ToFloat       | 2048   | thrpt | 3   | 20460.349   | ± 372.327  | ops/ms
Fp16ConversionBenchmark.float16ToFloatMemory | 2048   | thrpt | 3   | 2342125.200 | ± 9250.899 | ops/ms
Fp16ConversionBenchmark.floatToFloat16       | 2048   | thrpt | 3   | 22553.977   | ± 483.034  | ops/ms
Fp16ConversionBenchmark.floatToFloat16Memory | 2048   | thrpt | 3   | 2007899.797 | ± 150.296  | ops/ms
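For context, Float.float16ToFloat and Float.floatToFloat16 operate on IEEE 754 binary16 values stored in a short. The following is a minimal sketch, not the actual Fp16ConversionBenchmark source (the class and helper names here are made up), of the kind of counted loops that C2's SuperWord pass can auto-vectorize after this change:

```java
// HalfConversionDemo.java -- hypothetical illustration, assumes a JDK
// (20+) that provides Float.float16ToFloat / Float.floatToFloat16.
public class HalfConversionDemo {

    // Simple counted loop over short[] -> float[]; on capable x86
    // hardware this kind of loop can vectorize to VCVTPH2PS.
    static float[] toFloat(short[] halves) {
        float[] out = new float[halves.length];
        for (int i = 0; i < halves.length; i++) {
            out[i] = Float.float16ToFloat(halves[i]);
        }
        return out;
    }

    // Reverse direction, float[] -> short[]; can vectorize to VCVTPS2PH.
    static short[] toHalf(float[] floats) {
        short[] out = new short[floats.length];
        for (int i = 0; i < floats.length; i++) {
            out[i] = Float.floatToFloat16(floats[i]);
        }
        return out;
    }

    public static void main(String[] args) {
        // All of these values are exactly representable in binary16
        // (65504 is the largest finite fp16 value), so the round trip
        // is lossless.
        float[] src = {1.0f, 0.5f, -2.0f, 65504.0f};
        float[] back = toFloat(toHalf(src));
        for (int i = 0; i < src.length; i++) {
            if (src[i] != back[i]) throw new AssertionError("mismatch at " + i);
        }
        System.out.println("round-trip OK");
    }
}
```

Values that are not exactly representable in binary16 would of course round during `toHalf`, which is why the demo sticks to exact values.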

Kindly review and share your feedback.

Thanks.
Smita


Progress

  • Change must be properly reviewed (1 review required, with at least 1 Reviewer)
  • Change must not contain extraneous whitespace
  • Commit message must refer to an issue

Issue

  • JDK-8294588: Auto vectorize half precision floating point conversion APIs

Reviewing

Using git

Checkout this PR locally:
$ git fetch https://git.openjdk.org/jdk pull/11471/head:pull/11471
$ git checkout pull/11471

Update a local copy of the PR:
$ git checkout pull/11471
$ git pull https://git.openjdk.org/jdk pull/11471/head

Using Skara CLI tools

Checkout this PR locally:
$ git pr checkout 11471

View PR using the GUI difftool:
$ git pr show -t 11471

Using diff file

Download this PR as a diff file:
https://git.openjdk.org/jdk/pull/11471.diff

@bridgekeeper

bridgekeeper bot commented Dec 2, 2022

👋 Welcome back svkamath! A progress list of the required criteria for merging this PR into master will be added to the body of your pull request. There are additional pull request commands available for use with this pull request.

@smita-kamath
Author

/label hotspot

@openjdk

openjdk bot commented Dec 2, 2022

@smita-kamath The following label will be automatically applied to this pull request:

  • hotspot-compiler

When this pull request is ready to be reviewed, an "RFR" email will be sent to the corresponding mailing list. If you would like to change these labels, use the /label pull request command.

@openjdk openjdk bot added the hotspot-compiler hotspot-compiler-dev@openjdk.org label Dec 2, 2022
@smita-kamath smita-kamath marked this pull request as ready for review December 2, 2022 06:41
@openjdk openjdk bot added the rfr Pull request is ready for review label Dec 2, 2022
@mlbridge

mlbridge bot commented Dec 2, 2022

System.out.println("PASSED");
}

@Test
Member

New IR node checking annotations are missing.

virtual int Opcode() const;
};

class HF2FVNode : public VectorNode {
Member

You may use the same naming convention as used for other vector cast IR nodes: VectorCastH2F and F2H.

ins_pipe( pipe_slow );
%}

instruct vconvF2HF(vec dst, vec src) %{
Member

We do have a destination-memory flavour of VCVTPS2PH; adding a memory pattern will fold the subsequent store into one instruction.

Comment on lines 1687 to 1688
case Op_F2HFV:
if (!VM_Version::supports_f16c() && !VM_Version::supports_avx512vl()) {

We need a different check for the vector flavors (HF2FV/F2HV) vs the scalar flavors (ConvF2HF/ConvHF2F).
The check needed for the vector flavors is:
if (!VM_Version::supports_f16c() && !VM_Version::supports_avx512()) { return false; }

Also, in vm_version_x86.cpp, the F16C feature should be disabled when UseAVX is set to 0, i.e. the following
if (UseAVX < 1) {
_features &= ~CPU_AVX;
_features &= ~CPU_VZEROUPPER;
}
should be updated to:
if (UseAVX < 1) {
_features &= ~CPU_AVX;
_features &= ~CPU_VZEROUPPER;
_features &= ~CPU_F16C;
}

Comment on lines +1997 to +2002
case Op_HF2FV:
case Op_F2HFV:
if (!VM_Version::supports_f16c() && !VM_Version::supports_avx512vl()) {
return false;
}
break;

This can be removed, as the match_rule_supported() check has already happened.

Comment on lines +3708 to +3710
int src_size = Matcher::vector_length_in_bytes(this, $src);
int dst_size = src_size * 2;
int vlen_enc = vector_length_encoding(dst_size);

This could now be changed to:
int vlen_enc = Matcher::vector_length_encoding(this);

@openjdk

openjdk bot commented Dec 6, 2022

@smita-kamath this pull request cannot be integrated into master due to one or more merge conflicts. To resolve these merge conflicts and update this pull request, you can run the following commands in the local repository for your personal fork:

git checkout JDK-8294588
git fetch https://git.openjdk.org/jdk master
git merge FETCH_HEAD
# resolve conflicts and follow the instructions given by git merge
git commit -m "Merge master"
git push

@openjdk openjdk bot added the merge-conflict Pull request has merge conflict with target branch label Dec 6, 2022
Comment on lines 1965 to 1966
void Assembler::vcvtph2ps(XMMRegister dst, XMMRegister src, int vector_len) {
assert(VM_Version::supports_avx512vl() || VM_Version::supports_f16c(), "");

This should be VM_Version::supports_evex(). The same applies to vcvtps2ph.

@openjdk openjdk bot removed the merge-conflict Pull request has merge conflict with target branch label Dec 6, 2022
@sviswa7

sviswa7 commented Dec 6, 2022

@smita-kamath The patch looks good to me. You will need another review.
@vnkozlov could you please help review this patch?

@openjdk

openjdk bot commented Dec 6, 2022

@smita-kamath This change now passes all automated pre-integration checks.

ℹ️ This project also has non-automated pre-integration requirements. Please see the file CONTRIBUTING.md for details.

After integration, the commit message for the final commit will be:

8294588: Auto vectorize half precision floating point conversion APIs

Reviewed-by: sviswanathan, kvn, jbhateja, fgao, xgong

You can use pull request commands such as /summary, /contributor and /issue to adjust it as needed.

At the time this comment was updated, there had been 42 new commits pushed to the master branch.

As there are no conflicts, your changes will automatically be rebased on top of these commits when integrating. If you prefer to avoid this automatic rebasing, please check the documentation for the /integrate command for further details.

As you do not have Committer status in this project an existing Committer must agree to sponsor your change. Possible candidates are the reviewers of this PR (@sviswa7, @vnkozlov, @jatin-bhateja, @XiaohongGong) but any other Committer may sponsor as well.

➡️ To flag this PR as ready for integration with the above commit message, type /integrate in a new comment. (Afterwards, your sponsor types /sponsor in a new comment to perform the integration).

@openjdk openjdk bot added the ready Pull request is ready to be integrated label Dec 6, 2022
Contributor

@vnkozlov vnkozlov left a comment

Changes are straight-forward but I have few comments.

And we need to test it again.

if (UseAVX < 1) {
_features &= ~CPU_AVX;
_features &= ~CPU_VZEROUPPER;
_features &= ~CPU_F16C;
Contributor

Does is_knights_family() support f16c? We switch off some avx512 features for it. But it looks like f16c is not connected to avx512.

Author

Hi Vladimir, you're correct that f16c is not connected to avx512.

assert(VM_Version::supports_evex() || VM_Version::supports_f16c(), "");
InstructionMark im(this);
InstructionAttr attributes(vector_len, /* rex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /*uses_vl */ true);
attributes.set_address_attributes(/* tuple_type */ EVEX_HVM, /* input_size_in_bits */ EVEX_NObit);
Contributor

Is it correct to set EVEX_* attributes in case EVEX is switched off (by UseAVX flag)?

Contributor

Or a CPU may support F16C but not EVEX (avx512f).

Author

Hi Vladimir, we have a prior example in the vpaddb instruction, where these attributes are set. The assembler will ignore these attributes if UseAVX < 3.

Contributor

Good. Thank you for answering my questions.

}

@Test
@IR(counts = {IRNode.VECTOR_CAST_F2H, "> 0"}, applyIfCPUFeature = {"avx512f", "true"})
Member

You can also add "F16C" to the feature list and use applyIfCPUFeaturesOr.

Contributor

This is a good suggestion.

* @bug 8294588
* @summary Auto-vectorize Float.floatToFloat16, Float.float16ToFloat API's
* @requires vm.compiler2.enabled
* @requires vm.cpu.features ~= ".*avx.*"
Member

@jatin-bhateja jatin-bhateja Dec 7, 2022

The test may also execute on a target that has F16C, so you can remove this CPU feature check, since the IR annotations already include a feature check.

Member

@jatin-bhateja jatin-bhateja left a comment

Verified that my comments are addressed. The IR test is enabled for AVX, but it can also be enabled for F16C, since some VM features can be selectively enabled on certain instances.

@smita-kamath
Author

@vnkozlov I have addressed comments from Fei Gao and Xiaohong Gong. I have limited vectorization to avx2 and higher. If the changes look good to you, could you kindly run the tests? Thanks for all your help.

@vnkozlov
Contributor

vnkozlov commented Dec 7, 2022

> @vnkozlov I have addressed comments from Fei Gao and Xiaohong Gong. I have limited vectorization to avx2 and higher. If the changes look good to you, could you kindly run the tests? Thanks for all your help.

@smita-kamath, can you explain why it does not work with AVX1? If it really requires AVX2, then you should just disable F16C for (AVX < 2) instead of the current (AVX < 1) in vm_version_x86.cpp, and you would not need to modify the .ad file and the test.

@smita-kamath
Author

@vnkozlov you are right. It should work with AVX=1. I will make the changes. Thank you for your comment.

@smita-kamath
Author

@vnkozlov I have updated the test case to work with AVX=1.

@vnkozlov
Contributor

vnkozlov commented Dec 7, 2022

> @vnkozlov I have updated the test case to work with AVX=1.

Can you explain what was wrong with AVX1 and what change fixed the issue?
I see you renamed classes and addressed @fg1417's comment about the opcode. It is not clear to me what fixed the AVX1 issue.

@sviswa7

sviswa7 commented Dec 8, 2022

@vnkozlov The test was failing earlier with -XX:UseAVX=1 because the right implemented() check was not happening, as Fei Gao explained. In vectornode.cpp, VectorCastNode::implemented() was not getting the right vopc after the call to VectorCastNode::opcode() (it got VectorCastF2X and VectorCastS2X instead of VectorCastF2HF and VectorCastHF2F), so Matcher::match_rule_supported_superword() was called with the wrong vopc. This is now fixed, as Smita has fixed VectorCastNode::opcode() and VectorCastNode::implemented().

@vnkozlov
Contributor

vnkozlov commented Dec 8, 2022

Thank you @sviswa7 for the explanation! Good.

@vnkozlov
Contributor

vnkozlov commented Dec 8, 2022

I started new testing after verifying locally that test passed with -XX:UseAVX=1.

@jatin-bhateja
Member

> @vnkozlov The test was failing earlier with -XX:UseAVX=1 because the right implemented() check was not happening as Fei Gao explained. In vectornode.cpp, method VectorCastNode::implemented() was not getting the right vopc (VectorCastF2X, VectorCastS2X instead of VectorCastF2HF and VectorCastHF2F) after call to VectorCastNode::opcode() and so the Matcher::match_rule_supported_superword() was called with wrong vopc. This is now fixed as Smita has fixed the VectorCastNode::opcode() and VectorCastNode::implemented().

Also, the IR test was only enabled for avx512f earlier, which somehow overshadowed the problem. Since VM features are queried using CPUID, the matcher will give up if both F16C and AVX512F are absent. Hi @smita-kamath, we should not explicitly disable F16C in vm_version.

@sviswa7

sviswa7 commented Dec 8, 2022

> @vnkozlov The test was failing earlier with -XX:UseAVX=1 because the right implemented() check was not happening as Fei Gao explained. In vectornode.cpp, method VectorCastNode::implemented() was not getting the right vopc (VectorCastF2X, VectorCastS2X instead of VectorCastF2HF and VectorCastHF2F) after call to VectorCastNode::opcode() and so the Matcher::match_rule_supported_superword() was called with wrong vopc. This is now fixed as Smita has fixed the VectorCastNode::opcode() and VectorCastNode::implemented().
>
> Also, the IR test was only enabled for avx512f earlier, which somehow overshadowed the problem. Since VM features are queried using CPUID, the matcher will give up if both F16C and AVX512F are absent. Hi @smita-kamath, we should not explicitly disable F16C in vm_version.

@jatin-bhateja When the user sets -XX:UseAVX=0 on the command line, F16C needs to be disabled explicitly (in vm_version), as it needs AVX support.

@jatin-bhateja
Member

jatin-bhateja commented Dec 8, 2022

> @vnkozlov The test was failing earlier with -XX:UseAVX=1 because the right implemented() check was not happening as Fei Gao explained. In vectornode.cpp, method VectorCastNode::implemented() was not getting the right vopc (VectorCastF2X, VectorCastS2X instead of VectorCastF2HF and VectorCastHF2F) after call to VectorCastNode::opcode() and so the Matcher::match_rule_supported_superword() was called with wrong vopc. This is now fixed as Smita has fixed the VectorCastNode::opcode() and VectorCastNode::implemented().
>
> Also, the IR test was only enabled for avx512f earlier, which somehow overshadowed the problem. Since VM features are queried using CPUID, the matcher will give up if both F16C and AVX512F are absent. Hi @smita-kamath, we should not explicitly disable F16C in vm_version.
>
> @jatin-bhateja When the user sets -XX:UseAVX=0 on the command line, F16C needs to be disabled explicitly (in vm_version), as it needs AVX support.

Thank you @sviswa7 for the explanation!

@vnkozlov
Contributor

vnkozlov commented Dec 8, 2022

Unfortunately I have to restart testing: the JTREG version was updated, but I did not update my local repo, which caused half of the tests to fail with a "harness" error :^(

@vnkozlov
Contributor

vnkozlov commented Dec 8, 2022

The good news is that the test passed in this testing (hotspot vector testing passed as a whole).

@smita-kamath
Author

@vnkozlov, Thanks so much for running the tests. I really appreciate your help.

@fg1417 fg1417 left a comment

Thanks for your update. The change involving superword and vectornode parts looks good to me now.

@XiaohongGong XiaohongGong left a comment

LGTM, thanks for the update!

Contributor

@vnkozlov vnkozlov left a comment

Latest testing results are good.

@smita-kamath
Author

@vnkozlov Thanks a lot for your review comments and for testing this patch.

@smita-kamath
Author

/integrate

@openjdk openjdk bot added the sponsor Pull request is ready to be sponsored label Dec 8, 2022
@openjdk

openjdk bot commented Dec 8, 2022

@smita-kamath
Your change (at version dc7d728) is now ready to be sponsored by a Committer.

@jatin-bhateja
Member

/sponsor

@openjdk

openjdk bot commented Dec 8, 2022

Going to push as commit 073897c.
Since your change was applied there have been 43 commits pushed to the master branch.

Your commit was automatically rebased without conflicts.

@openjdk openjdk bot added the integrated Pull request has been integrated label Dec 8, 2022
@openjdk openjdk bot closed this Dec 8, 2022
@openjdk openjdk bot removed ready Pull request is ready to be integrated rfr Pull request is ready for review sponsor Pull request is ready to be sponsored labels Dec 8, 2022
@openjdk

openjdk bot commented Dec 8, 2022

@jatin-bhateja @smita-kamath Pushed as commit 073897c.

💡 You may see a message that your pull request was closed with unmerged commits. This can be safely ignored.


Labels

hotspot-compiler hotspot-compiler-dev@openjdk.org integrated Pull request has been integrated


6 participants