@erifan (Contributor) commented Sep 10, 2025

The AArch64 SVE and SVE2 architectures lack an instruction suitable for subword-type compress operations. Therefore, the current implementation uses the 32-bit SVE compact instruction to compress subword types by first widening the high and low parts to 32 bits, compressing them, and then narrowing them back to their original type. Finally, the high and low parts are merged using the index + tbl instructions.
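
The scheme described above can be sketched as a plain-Python functional model (purely illustrative: list elements stand in for vector lanes and the merge step is abstracted; none of these names come from the actual HotSpot code):

```python
def compact(elems, mask):
    # Model of the 32-bit SVE 'compact' instruction: active elements are
    # packed into the low lanes and the remaining lanes are zeroed.
    kept = [e for e, m in zip(elems, mask) if m]
    return kept + [0] * (len(elems) - len(kept))

def compress_subword(src, mask):
    # Compress a subword vector by compacting the (notionally widened)
    # low and high halves separately, then merging the active elements.
    half = len(src) // 2
    lo = compact(src[:half], mask[:half])   # compressed low half
    hi = compact(src[half:], mask[half:])   # compressed high half
    n_lo = sum(mask[:half])                 # active lanes in the low half
    merged = lo[:n_lo] + hi                 # merge step (abstracted here)
    return (merged + [0] * len(src))[:len(src)]

print(compress_subword([1, 2, 3, 4, 5, 6, 7, 8],
                       [1, 0, 1, 0, 0, 1, 1, 0]))
# -> [1, 3, 6, 7, 0, 0, 0, 0]
```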

This approach is significantly slower compared to architectures with native support. After evaluating all available AArch64 SVE instructions and experimenting with various implementations—such as looping over the active elements, extraction, and insertion—I confirmed that the existing algorithm is optimal given the instruction set. However, there is still room for optimization in the following two aspects:

  1. Merging with index + tbl is suboptimal due to the high latency of the index instruction.
  2. For partial subword types, operations on the upper half are unnecessary because those bits are invalid.

This pull request introduces the following changes:

  1. Replaces index + tbl with the whilelt + splice instructions, which offer lower latency and higher throughput.
  2. Eliminates unnecessary compress operations for partial subword type cases.
  3. Uses one fewer temporary register in sve_compress_byte to reduce potential register pressure.
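
To see why whilelt + splice can serve as the merge step, here is a plain-Python model of the two instructions' documented semantics for a whilelt-style predicate (illustrative only, not the actual implementation):

```python
def whilelt(n, vlen):
    # Model of 'whilelt': a predicate with the first n lanes active.
    return [1 if i < n else 0 for i in range(vlen)]

def splice(pred, a, b):
    # Model of 'splice' under a whilelt predicate: the active elements
    # of a, followed by elements of b, truncated to one vector length.
    active = [x for x, p in zip(a, pred) if p]
    return (active + b)[:len(a)]

lo = [1, 3, 0, 0, 0, 0, 0, 0]  # low half after compression (2 active lanes)
hi = [6, 7, 0, 0, 0, 0, 0, 0]  # high half after compression
print(splice(whilelt(2, 8), lo, hi))
# -> [1, 3, 6, 7, 0, 0, 0, 0]
```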

Benchmark results demonstrate that these changes significantly improve performance.

Benchmarks on an Nvidia Grace machine with 128-bit SVE:

```
Benchmark                Unit    Before   Error  After    Error  Uplift
Byte128Vector.compress   ops/ms  4846.97  26.23  6638.56  31.60  1.36
Byte64Vector.compress    ops/ms  2447.69  12.95  7167.68  34.49  2.92
Short128Vector.compress  ops/ms  7174.88  40.94  8398.45   9.48  1.17
Short64Vector.compress   ops/ms  3618.72   3.04  8618.22  10.91  2.38
```

This PR was tested on 128-bit, 256-bit, and 512-bit SVE environments, and all tests passed.


Progress

  • Change must be properly reviewed (1 review required, with at least 1 Reviewer)
  • Change must not contain extraneous whitespace
  • Commit message must refer to an issue

Issue

  • JDK-8366333: AArch64: Enhance SVE subword type implementation of vector compress (Enhancement - P4)

Reviewers

Contributors

  • Jatin Bhateja <jbhateja@openjdk.org>

Reviewing

Using git

Checkout this PR locally:
$ git fetch https://git.openjdk.org/jdk.git pull/27188/head:pull/27188
$ git checkout pull/27188

Update a local copy of the PR:
$ git checkout pull/27188
$ git pull https://git.openjdk.org/jdk.git pull/27188/head

Using Skara CLI tools

Checkout this PR locally:
$ git pr checkout 27188

View PR using the GUI difftool:
$ git pr show -t 27188

Using diff file

Download this PR as a diff file:
https://git.openjdk.org/jdk/pull/27188.diff

Using Webrev

Link to Webrev Comment

bridgekeeper bot commented Sep 10, 2025

👋 Welcome back erifan! A progress list of the required criteria for merging this PR into master will be added to the body of your pull request. There are additional pull request commands available for use with this pull request.

openjdk bot commented Sep 10, 2025

@erifan This change now passes all automated pre-integration checks.

ℹ️ This project also has non-automated pre-integration requirements. Please see the file CONTRIBUTING.md for details.

After integration, the commit message for the final commit will be:

8366333: AArch64: Enhance SVE subword type implementation of vector compress

Co-authored-by: Jatin Bhateja <jbhateja@openjdk.org>
Reviewed-by: jbhateja, xgong, galder, vlivanov

You can use pull request commands such as /summary, /contributor and /issue to adjust it as needed.

At the time when this comment was updated, 203 new commits had been pushed to the master branch.

As there are no conflicts, your changes will automatically be rebased on top of these commits when integrating. If you prefer to avoid this automatic rebasing, please check the documentation for the /integrate command for further details.

As you do not have Committer status in this project an existing Committer must agree to sponsor your change. Possible candidates are the reviewers of this PR (@eme64, @iwanowww, @jatin-bhateja, @XiaohongGong) but any other Committer may sponsor as well.

➡️ To flag this PR as ready for integration with the above commit message, type /integrate in a new comment. (Afterwards, your sponsor types /sponsor in a new comment to perform the integration).

openjdk bot commented Sep 10, 2025

@erifan this pull request can not be integrated into master due to one or more merge conflicts. To resolve these merge conflicts and update this pull request you can run the following commands in the local repository for your personal fork:

```
git checkout JDK-8366333-compress
git fetch https://git.openjdk.org/jdk.git master
git merge FETCH_HEAD
# resolve conflicts and follow the instructions given by git merge
git commit -m "Merge master"
git push
```

@openjdk openjdk bot added the merge-conflict Pull request has merge conflict with target branch label Sep 10, 2025
openjdk bot commented Sep 10, 2025

@erifan The following label will be automatically applied to this pull request:

  • hotspot-compiler

When this pull request is ready to be reviewed, an "RFR" email will be sent to the corresponding mailing list. If you would like to change these labels, use the /label pull request command.

@openjdk openjdk bot added hotspot-compiler hotspot-compiler-dev@openjdk.org rfr Pull request is ready for review labels Sep 10, 2025
mlbridge bot commented Sep 10, 2025

Webrevs

@galderz (Contributor) commented Sep 11, 2025

Would it make sense to additionally run the relevant benchmarks on other popular aarch64 platforms such as Graviton, to make sure the improvements are seen there as well?

@openjdk openjdk bot removed the merge-conflict Pull request has merge conflict with target branch label Sep 15, 2025
@erifan (Contributor, Author) commented Sep 15, 2025

@galderz Yeah, absolutely. These are the test results on an AWS Graviton3 (V1) machine; we can see a similar performance gain.

```
Benchmark                 Unit    Before    Error   After     Error   Uplift
Byte128Vector.compress    ops/ms  2405.511   0.763  6116.85   17.699  2.54284848
Byte64Vector.compress     ops/ms  1151.662  11.262  5278.924   6.74   4.58374419
Double128Vector.compress  ops/ms  4919.017   4.909  4940.232  20.143  1.00431285
Double64Vector.compress   ops/ms    37.071   0.778    37.109   0.945  1.00102506
Float128Vector.compress   ops/ms  9580.312  48.341  9586.499  74.934  1.0006458
Float64Vector.compress    ops/ms  4943.728   7.361  4941.917   5.871  0.99963368
Int128Vector.compress     ops/ms  9496.991  34.972  9515.122  29.204  1.00190913
Int64Vector.compress      ops/ms  4940.23    7.141  4941.815   5.077  1.00032084
Long128Vector.compress    ops/ms  4918.142  14.835  4917.148   9.05   0.99979789
Long64Vector.compress     ops/ms    36.58    0.426    36.574   0.431  0.99983598
Short128Vector.compress   ops/ms  3343.878   0.898  6813.421   4.143  2.03758062
Short64Vector.compress    ops/ms  1595.358   3.37   3390.959   3.55   2.12551603
```

@eme64 (Contributor) left a comment

Drive-by comments, going on vacation soon so don't depend on me fully reviewing this any time soon ;)

@eme64 (Contributor) commented Sep 18, 2025

@erifan I'm going to be out of the office for 3 weeks, so feel free to ask others for reviews :)

@erifan (Contributor, Author) left a comment

Thanks for your review @eme64 . Have a nice trip!

// Example input: src = q p n m l k j i h g f e d c b a, one character is 8 bits.
// mask = 0 1 0 0 0 0 0 1 0 1 0 0 0 1 0 1, one character is 1 bit.
// Expected result: dst = 0 0 0 0 0 0 0 0 0 0 0 p i g c a
sve_dup(vtmp3, B, 0);
Contributor

For clarity, you could declare a local FloatRegister vzr = vtmp3 and refer to it at all use sites. That would make things clearer.

Contributor Author

Done, thanks!

Contributor

The following reads slightly better, but it's up to you how to shape it.

```
FloatRegister vzr = vtmp3;
sve_dup(vzr, B, 0);
```

Contributor Author

Done, thanks!

void C2_MacroAssembler::sve_compress_short(FloatRegister dst, FloatRegister src, PRegister mask,
FloatRegister vtmp1, FloatRegister vtmp2,
PRegister pgtmp) {
FloatRegister vtmp, FloatRegister vtmp_zr,
Contributor

On code style: it's confusing to see a temp register used in a non-destructive way to pass a constant. If you want to avoid materializing an all-zero vector constant, I suggest naming it differently (e.g., zr) and putting the argument before vtmp.

Contributor Author

Done.

@erifan (Contributor, Author) left a comment

@iwanowww I have addressed all of your suggestions, thanks for your review.

// Example input: src = q p n m l k j i h g f e d c b a, one character is 8 bits.
// mask = 0 1 0 0 0 0 0 1 0 1 0 0 0 1 0 1, one character is 1 bit.
// Expected result: dst = 0 0 0 0 0 0 0 0 0 0 0 p i g c a
sve_dup(vtmp3, B, 0);
Contributor Author

Done, thanks!

void C2_MacroAssembler::sve_compress_short(FloatRegister dst, FloatRegister src, PRegister mask,
FloatRegister vtmp1, FloatRegister vtmp2,
PRegister pgtmp) {
FloatRegister vtmp, FloatRegister vtmp_zr,
Contributor Author

Done.

@iwanowww (Contributor) left a comment

Looks good.

// Example input: src = q p n m l k j i h g f e d c b a, one character is 8 bits.
// mask = 0 1 0 0 0 0 0 1 0 1 0 0 0 1 0 1, one character is 1 bit.
// Expected result: dst = 0 0 0 0 0 0 0 0 0 0 0 p i g c a
sve_dup(vtmp3, B, 0);
Contributor

The following reads slightly better, but it's up to you how to shape it.

```
FloatRegister vzr = vtmp3;
sve_dup(vzr, B, 0);
```

@openjdk openjdk bot added the ready Pull request is ready to be integrated label Sep 29, 2025

```
@Test
@IR(counts = { IRNode.COMPRESS_VB, "= 1" },
    applyIfCPUFeature = { "sve", "true" })
```
@jatin-bhateja (Member) commented Sep 30, 2025

Hi @erifan,
Nice work!
Can you please also enable these tests for x86? The following are the relevant features.

```
CompressVB    -> avx512_vbmi2, avx512_vl
CompressVS    -> avx512_vbmi2, avx512_vl
CompressVI/VF -> avx512f, avx512vl
CompressVL/VD -> avx512f, avx512vl
```

PS: avx512_vbmi2 is missing from test/IREncodingPrinter.java

FYI, currently we don't support sub-word compression intrinsics on AVX2/E-core targets. I created a vectorized algorithm without any x86 backend change, using only the Vector API, and it showed a 12x improvement.

https://github.com/jatin-bhateja/external_staging/blob/main/VectorizedAlgos/SubwordCompress/short_vector_compress.java

```
PROMPT>java -cp . --add-modules=jdk.incubator.vector short_vector_compress 0
WARNING: Using incubator modules: jdk.incubator.vector
[ baseline time] 976 ms  [res] 429507073
PROMPT>java -cp . --add-modules=jdk.incubator.vector short_vector_compress 1
WARNING: Using incubator modules: jdk.incubator.vector
[ withopt time] 80 ms  [res] 429507073
```

Contributor Author

Done, please help me check if it is correct, thank you! I have tested it locally.

@openjdk openjdk bot removed the ready Pull request is ready to be integrated label Oct 7, 2025
@erifan (Contributor, Author) commented Oct 7, 2025

Hi @iwanowww @jatin-bhateja I have addressed your comments, thanks for your review!

@jatin-bhateja (Member) left a comment

Thanks @erifan ,
Verified IR test changes.

@openjdk openjdk bot added the ready Pull request is ready to be integrated label Oct 7, 2025
@erifan (Contributor, Author) commented Oct 8, 2025

/contributor add @jatin-bhateja

openjdk bot commented Oct 8, 2025

@erifan
Contributor Jatin Bhateja <jbhateja@openjdk.org> successfully added.

@erifan (Contributor, Author) commented Oct 15, 2025

Hi, can I integrate this patch now? Could any Oracle friends help me with internal testing of this patch? Thanks~

@XiaohongGong left a comment

LGTM! Thanks! Reviewed internally.

@erifan (Contributor, Author) commented Oct 20, 2025

I have tested many different configurations on both aarch64 and x64, including 128/256/512-bit SVE2/SVE/NEON and AVX3/2/1, SSE4/3/2/1. All tests passed, so I'll integrate the PR. Thanks, all!
/integrate

@openjdk openjdk bot added the sponsor Pull request is ready to be sponsored label Oct 20, 2025
openjdk bot commented Oct 20, 2025

@erifan
Your change (at version c75df30) is now ready to be sponsored by a Committer.

@XiaohongGong
/sponsor

openjdk bot commented Oct 21, 2025

Going to push as commit 2de8d58.
Since your change was applied, there have been 218 commits pushed to the master branch.

Your commit was automatically rebased without conflicts.

@openjdk openjdk bot added the integrated Pull request has been integrated label Oct 21, 2025
@openjdk openjdk bot closed this Oct 21, 2025
@openjdk openjdk bot removed ready Pull request is ready to be integrated rfr Pull request is ready for review sponsor Pull request is ready to be sponsored labels Oct 21, 2025
openjdk bot commented Oct 21, 2025

@XiaohongGong @erifan Pushed as commit 2de8d58.

💡 You may see a message that your pull request was closed with unmerged commits. This can be safely ignored.

@erifan erifan deleted the JDK-8366333-compress branch October 21, 2025 01:46

Labels

hotspot-compiler hotspot-compiler-dev@openjdk.org integrated Pull request has been integrated
