
8296507: GCM using more memory than necessary with in-place operations #11121

Closed
Wants to merge 446 commits into master

Conversation

@ascarpino (Contributor) commented Nov 13, 2022

I would like a review of an update to the GCM code. A recent report showed that GCM memory usage for TLS was very large. This was a result of in-place buffers, which TLS uses, and of how the code handled the combined intrinsic method during decryption. A temporary buffer was needed because the combined intrinsic performs gctr before ghash, which during in-place decryption results in a bad tag. The fix is to not use the combined intrinsic during in-place decryption and to depend instead on the individual GHASH and CounterMode intrinsics. Direct ByteBuffers are not affected, as they are not used by the intrinsics directly.
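To make the in-place case concrete, here is a minimal, hypothetical demo (not part of this change; class and method names are invented for illustration) that exercises the path TLS relies on: AES/GCM decryption where input and output share the same byte[] and offset, via the standard JCE API.

```java
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// Hypothetical demo of the in-place byte[] path this PR optimizes.
public class InPlaceGcmDemo {

    // Encrypts msg, then decrypts the ciphertext in place (same array,
    // same offset) and returns the recovered plaintext bytes.
    public static byte[] roundTrip(byte[] msg) throws Exception {
        byte[] key = new byte[16];
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(key);
        new SecureRandom().nextBytes(iv);
        SecretKeySpec k = new SecretKeySpec(key, "AES");
        GCMParameterSpec spec = new GCMParameterSpec(128, iv);

        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, k, spec);
        byte[] buf = new byte[msg.length + 16];       // room for the 16-byte tag
        int ctLen = c.doFinal(msg, 0, msg.length, buf, 0);

        c.init(Cipher.DECRYPT_MODE, k, spec);
        // In-place decryption: plaintext is written over the ciphertext array.
        int ptLen = c.doFinal(buf, 0, ctLen, buf, 0);
        return java.util.Arrays.copyOf(buf, ptLen);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(new String(roundTrip("hello gcm".getBytes())));
    }
}
```

With this change, the decrypt call above no longer allocates a temporary copy of the ciphertext when input and output overlap like this.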

The reduction in memory usage brought performance back to where it was before, despite using the slower individual intrinsics (gctr & ghash separately): the extra memory allocation for the temporary buffer outweighed the faster combined intrinsic.

    JDK 17:   122913.554 ops/sec
    JDK 19:    94885.008 ops/sec
    Post fix: 122735.804 ops/sec 

There is no regression test because this is a memory-usage change and test coverage already exists.


Progress

  • Change must be properly reviewed (1 review required, with at least 1 Reviewer)
  • Change must not contain extraneous whitespace
  • Commit message must refer to an issue

Issue

  • JDK-8296507: GCM using more memory than necessary with in-place operations

Reviewers

Reviewing

Using git

Checkout this PR locally:
$ git fetch https://git.openjdk.org/jdk pull/11121/head:pull/11121
$ git checkout pull/11121

Update a local copy of the PR:
$ git checkout pull/11121
$ git pull https://git.openjdk.org/jdk pull/11121/head

Using Skara CLI tools

Checkout this PR locally:
$ git pr checkout 11121

View PR using the GUI difftool:
$ git pr show -t 11121

Using diff file

Download this PR as a diff file:
https://git.openjdk.org/jdk/pull/11121.diff

@bridgekeeper bot commented Nov 13, 2022

👋 Welcome back ascarpino! A progress list of the required criteria for merging this PR into master will be added to the body of your pull request. There are additional pull request commands available for use with this pull request.

@openjdk bot commented Nov 13, 2022

@ascarpino The following label will be automatically applied to this pull request:

  • security

When this pull request is ready to be reviewed, an "RFR" email will be sent to the corresponding mailing list. If you would like to change these labels, use the /label pull request command.

@openjdk openjdk bot added the security security-dev@openjdk.org label Nov 13, 2022
@ascarpino ascarpino changed the title GCM using more memory than necessary with in-place operations 8296507: GCM using more memory than necessary with in-place operations Nov 14, 2022
@ascarpino ascarpino marked this pull request as ready for review November 15, 2022 22:41
@openjdk openjdk bot added the rfr Pull request is ready for review label Nov 15, 2022
@mlbridge bot commented Nov 15, 2022

Webrevs

@j-baker commented Nov 16, 2022

It's possibly worth noting that while this is merely fixing a regression for x86, it's very likely a decent sized performance improvement on arm64, where intrinsics for AES-GCM (depending on JVM vendor) aren't added until after Java 17.

@carterkozak (Contributor) commented:

Thanks for looking into this, @ascarpino!

In testing this with a local build, it improves performance in cases using heap buffers (a super-set of the socket case); however, servers which use direct byte-buffers still exhibit a similar performance regression (heavy allocation compared to jdk17, ~10% slower TLS performance in HTTP+TLS benchmarks). It's possible that this has a different root cause, but the outcome is strikingly similar.

@j-baker commented Nov 16, 2022

Carter, when I looked at this a few months back (admittedly I'm a fairly careless profiler and didn't fully dig down to a root cause), I felt as though direct bytebuffers were possibly getting compromised right about here:

int implGCMCrypt(ByteBuffer src, ByteBuffer dst) {
where, essentially, it's easier for intrinsics to operate on byte arrays, so any direct data passed in gets copied into a new byte array which is then passed into the intrinsic.

It's possible that I'm misunderstanding, however. I think one could test this hypothesis by adjusting the size of PARALLEL_LEN. Halving it will lead to a less efficient intrinsic usage but correspondingly halve the allocation rate.
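The hypothesis above can be sketched as follows. This is a hypothetical illustration, not the JDK's actual code: `CHUNK` stands in for a PARALLEL_LEN-style limit, and `allocationsFor` simply counts the per-chunk heap copies a direct buffer would incur.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of the copy-to-heap pattern described above:
// intrinsics operate on byte[], so data in a direct ByteBuffer is first
// copied into a freshly allocated heap array, one allocation per chunk.
public class DirectCopySketch {

    static final int CHUNK = 512 * 1024; // stand-in for a PARALLEL_LEN-style limit

    // Counts how many temporary heap arrays a full pass over src would
    // allocate if each chunk gets its own byte[] copy.
    public static int allocationsFor(ByteBuffer src) {
        int remaining = src.remaining();
        int allocations = 0;
        while (remaining > 0) {
            int n = Math.min(remaining, CHUNK);
            byte[] tmp = new byte[n];          // per-chunk heap copy
            src.get(tmp, 0, n);                // copy out of the direct buffer
            // ... an intrinsic would process tmp here ...
            remaining -= n;
            allocations++;
        }
        return allocations;
    }

    public static void main(String[] args) {
        ByteBuffer direct = ByteBuffer.allocateDirect(3 * CHUNK + 1);
        System.out.println(allocationsFor(direct)); // prints 4
    }
}
```

Halving the chunk size in this sketch halves the size of each temporary array but doubles the number of intrinsic calls, which matches the trade-off described above.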

* large chunks of data into 1MB sized chunks. This is to place
* an upper limit on the number of blocks encrypted in the intrinsic.
*
* For decrypting in-place byte[], calling methods must ct must set to null
Contributor:

end of sentence mangled

Contributor:

"must ct must set to null" => "must set ct to null"?

private static int implGCMCrypt(byte[] in, int inOfs, int inLen, byte[] ct,
int ctOfs, byte[] out, int outOfs,
GCTR gctr, GHASH ghash) {

int len = 0;
if (inLen > SPLIT_LEN) {
// Loop if input length is greater than the SPLIT_LEN
Contributor:

comment doesn't add anything not already obvious from the code

Contributor Author:

yeah.. probably right

while (inLen >= SPLIT_LEN) {
int partlen = implGCMCrypt0(in, inOfs + len, SPLIT_LEN, ct,
partlen = implGCMCrypt0(in, inOfs + len, SPLIT_LEN, ct,
Contributor:

why not int partlen and get rid of line 594

Contributor Author:

The same partlen can be reused for all iterations through the while loop.

ctOfs + len, out, outOfs + len, gctr, ghash);
len += partlen;
inLen -= partlen;
}
}

// Finish any remaining data
Contributor:

comment doesn't add anything special

Contributor Author:

ok
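The splitting loop under discussion can be rendered as a standalone sketch with the reviewer's suggestion applied (partlen declared once and reused). This is hypothetical illustration, not the JDK's code: `SPLIT_LEN`'s value and `processChunk` are stand-ins for the real constant and the intrinsic-backed `implGCMCrypt0`.

```java
// Hypothetical standalone rendering of the chunked-processing loop:
// large inputs are handled in SPLIT_LEN-sized pieces to cap the number
// of blocks processed per intrinsic call, then the remainder is finished.
public class SplitLoopSketch {

    static final int SPLIT_LEN = 1024 * 1024; // 1 MB upper bound per call

    // Stand-in for the intrinsic-backed helper: "processes" len bytes
    // starting at ofs and returns the number of bytes handled.
    static int processChunk(byte[] in, int ofs, int len) {
        return len;
    }

    // Processes inLen bytes in SPLIT_LEN-sized pieces, then the remainder.
    public static int crypt(byte[] in, int inOfs, int inLen) {
        int len = 0;
        if (inLen >= SPLIT_LEN) {
            int partlen;                       // declared once, reused per iteration
            while (inLen >= SPLIT_LEN) {
                partlen = processChunk(in, inOfs + len, SPLIT_LEN);
                len += partlen;
                inLen -= partlen;
            }
        }
        // Finish any remaining data shorter than SPLIT_LEN.
        len += processChunk(in, inOfs + len, inLen);
        return len;
    }

    public static void main(String[] args) {
        byte[] data = new byte[2 * SPLIT_LEN + 123];
        System.out.println(crypt(data, 0, data.length)); // total bytes processed
    }
}
```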

@@ -666,6 +691,11 @@ abstract class GCMEngine {
byte[] originalOut = null;
int originalOutOfs = 0;

// True if op is in-place array decryption with the input & output
Contributor:

// Setting inPlaceArray to true turns off combined intrinsic processing.

Contributor Author:

yeah that's better

Contributor Author:

Actually, the replacement isn't entirely accurate. This only applies to decryption, and to buffers that don't overlap where the input is ahead of the output. That's why the comment is so wordy.

gctr, ghash);
byte[] array;
if (encryption) {
array = dst.array();
Contributor:

You could factor out lines 764 and 770 by changing line 762 to
byte[] array = encryption ? dst.array() : src.array();

Contributor Author:

That was intentional: since line 763 checks the encryption boolean, I can define 'array' in that condition instead of having two conditions for the same thing.

} else {
Unsafe.getUnsafe().setMemory(((DirectBuffer)dst).address(),
len + dst.position(), (byte)0);
// If this is an in-place array, don't zero the src
Contributor:

The comment doesn't jibe with the line of code on the next line; it is the inverse of the comment.

Contributor Author:

ok

@ascarpino (Contributor Author) commented:

> It's possibly worth noting that while this is merely fixing a regression for x86, it's very likely a decent sized performance improvement on arm64, where intrinsics for AES-GCM (depending on JVM vendor) aren't added until after Java 17.

That is why the description refers to the "combined intrinsic" rather than spelling out AVX512; the change affects all platforms.

@ascarpino (Contributor Author) commented:

> Thanks for looking into this, @ascarpino!
>
> In testing this using a local build, it improves performance in cases using heap buffers (a super-set of the socket case), however servers which use direct byte-buffers still exhibit a similar performance regression (heavy allocation compared to jdk17, ~10% slower TLS performance in HTTP+TLS benchmarks). It's possible that has a different root cause, but the outcome is strikingly similar.

Well, the provided test ran with heap bytebuffers. Direct bytebuffers are handled differently because the code has to copy data for the intrinsic, but that allocation is pretty low and I believe it was the same in 17. So I'm not aware of a direct bytebuffer slowdown like the one you now report.

@carterkozak (Contributor) commented:

Great point, I neglected to add benchmark coverage for the direct buffer case. I've updated my benchmark repository with a server using direct buffers:
https://github.com/carterkozak/java-crypto-allocation-performance/blob/develop/java-crypto-allocation-performance/src/main/java/com/palantir/java/crypto/allocations/DirectBufferTransportLayerSecurityBenchmark.java

@ascarpino (Contributor Author) commented:

> Thanks for looking into this, @ascarpino!
>
> In testing this using a local build, it improves performance in cases using heap buffers (a super-set of the socket case), however servers which use direct byte-buffers still exhibit a similar performance regression (heavy allocation compared to jdk17, ~10% slower TLS performance in HTTP+TLS benchmarks). It's possible that has a different root cause, but the outcome is strikingly similar.

Looking at this, it's not related to the same in-place issues; it's a result of the combined intrinsic's requirements. Maybe some better tuning can be done, but I think this is unavoidable. I can consider it in a future PR.

@carterkozak (Contributor) commented:

> Looking at this, it's not related to the same in-place issues; it's a result of the combined intrinsic's requirements. Maybe some better tuning can be done, but I think this is unavoidable. I can consider it in a future PR.

That makes sense. Perhaps we could document the finding in a new jira issue for posterity, in case it impacts other folks as well? I can't overstate my appreciation for your work, thank you!

@ascarpino (Contributor Author) commented:

By the way, that last commit message is wrong; it's cleaning up from mcpowers's comments.

* For decrypting in-place byte[], calling methods must ct must set to null
* to avoid combined intrinsic, call GHASH directly before GCTR to avoid
* a bad tag exception. This check is not performed here because it would
* impose a check every operation which is less efficient.
Contributor:

Missing "for" after "check"?

null, 0, array, dst.arrayOffset() + dst.position(),
gctr, ghash);
} else {
int ofs = src.arrayOffset() + src.position();
Contributor:

Isn't this also used on line 774? Why not move this up and directly refer to it for both places?

Contributor Author:

The line 774 case only uses the calculated value once. I'm avoiding the unnecessary store & load operations that occur when the value is set to a variable; I see them when I run javap -c to view the bytecode. It's purely a performance optimization. I do set it to a variable in the line 778 case because calculating the value twice is probably more expensive than one calculation plus the store & load operations.

Contributor:

I see.

@jnimeh (Member) left a comment:

This looks good, I only have nit comments.

* large chunks of data into 1MB sized chunks. This is to place
* an upper limit on the number of blocks encrypted in the intrinsic.
*
* For decrypting in-place byte[], calling methods must ct must set to null
Member:

Typo nit? Should it be "calling methods must set ct to null"

} else {
Unsafe.getUnsafe().setMemory(((DirectBuffer)dst).address(),
len + dst.position(), (byte)0);
// If this is no an in-place array, zero the dst buffer
Member:

nit: no -> not

@openjdk bot commented Dec 1, 2022

@ascarpino This change now passes all automated pre-integration checks.

ℹ️ This project also has non-automated pre-integration requirements. Please see the file CONTRIBUTING.md for details.

After integration, the commit message for the final commit will be:

8296507: GCM using more memory than necessary with in-place operations

Reviewed-by: jnimeh

You can use pull request commands such as /summary, /contributor and /issue to adjust it as needed.

At the time when this comment was updated there had been no new commits pushed to the master branch. If another commit should be pushed before you perform the /integrate command, your PR will be automatically rebased. If you prefer to avoid any potential automatic rebasing, please check the documentation for the /integrate command for further details.

➡️ To integrate this PR with the above commit message to the master branch, type /integrate in a new comment.

@openjdk openjdk bot added the ready Pull request is ready to be integrated label Dec 1, 2022
originalOutOfs = outOfs;
return new byte[out.length];
}
inPlaceArray = (!encryption);
Contributor:

Is "inPlaceArray" reset somewhere? When inOfs >= outOfs and the function returns on line 1051, the inPlaceArray value will not be set on line 1053. Is this intentional? My vacation is coming up and I can't finish this review before I leave. I see that Jamil has approved it; no need to hold this up for me. Thanks.

@openjdk bot commented Dec 6, 2022

@ascarpino this pull request cannot be integrated into master due to one or more merge conflicts. To resolve these merge conflicts and update this pull request, you can run the following commands in the local repository for your personal fork:

git checkout gcm
git fetch https://git.openjdk.org/jdk master
git merge FETCH_HEAD
# resolve conflicts and follow the instructions given by git merge
git commit -m "Merge master"
git push

@openjdk openjdk bot added merge-conflict Pull request has merge conflict with target branch and removed ready Pull request is ready to be integrated labels Dec 6, 2022
@openjdk openjdk bot removed merge-conflict Pull request has merge conflict with target branch rfr Pull request is ready for review labels Dec 6, 2022
@openjdk openjdk bot added ready Pull request is ready to be integrated rfr Pull request is ready for review labels Dec 6, 2022
@ascarpino (Contributor Author) commented:

/integrate

@openjdk bot commented Dec 6, 2022

Going to push as commit b4da0ee.
Since your change was applied there have been 3 commits pushed to the master branch:

  • cd2182a: 8295724: VirtualMachineError: Out of space in CodeCache for method handle intrinsic
  • 2cdc019: 8298178: Update to use jtreg 7.1.1
  • 79d163d: 8293412: Remove unnecessary java.security.egd overrides

Your commit was automatically rebased without conflicts.

@openjdk openjdk bot added the integrated Pull request has been integrated label Dec 6, 2022
@openjdk openjdk bot closed this Dec 6, 2022
@openjdk openjdk bot removed ready Pull request is ready to be integrated rfr Pull request is ready for review labels Dec 6, 2022
@openjdk bot commented Dec 6, 2022

@ascarpino Pushed as commit b4da0ee.

💡 You may see a message that your pull request was closed with unmerged commits. This can be safely ignored.

@theRealAph (Contributor) commented:

What benchmark was this? How large were the buffers?

@carterkozak (Contributor) commented:

@theRealAph I reported this oddity to the mailing list, including a benchmark which I later updated to add coverage for direct buffers.

@ascarpino ascarpino deleted the gcm branch February 14, 2025 21:15