8296507: GCM using more memory than necessary with in-place operations #11121
Conversation
👋 Welcome back ascarpino! A progress list of the required criteria for merging this PR into master will be added to the body of your pull request.
@ascarpino The following label will be automatically applied to this pull request:
When this pull request is ready to be reviewed, an "RFR" email will be sent to the corresponding mailing list. If you would like to change these labels, use the /label pull request command.
Webrevs
It's possibly worth noting that while this is merely fixing a regression for x86, it's very likely a decent-sized performance improvement on arm64, where intrinsics for AES-GCM (depending on JVM vendor) aren't added until after Java 17.
Thanks for looking into this, @ascarpino! Testing this with a local build shows it improves performance in cases using heap buffers (a super-set of the socket case); however, servers which use direct byte buffers still exhibit a similar performance regression (heavy allocation compared to jdk17, ~10% slower TLS performance in HTTP+TLS benchmarks). It's possible that it has a different root cause, but the outcome is strikingly similar.
Carter, when I looked at this a few months back (admittedly I'm a fairly careless profiler and didn't fully dig down to a root cause) I felt as though direct bytebuffers were possibly getting compromised round about here:
It's possible that I'm misunderstanding, however. I think one could test this hypothesis by adjusting the size of PARALLEL_LEN: halving it will lead to less efficient intrinsic usage but correspondingly halve the allocation rate.
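For illustration, here is a rough, self-contained sketch of that tradeoff; SPLIT_LEN and cryptChunk below are stand-ins, not the provider's actual field or intrinsic entry point. Each pass hands at most SPLIT_LEN bytes to the chunked step, so any per-call temporary allocation is bounded by SPLIT_LEN; halving the constant halves that bound but doubles the number of (individually less efficient) calls.

final class ChunkingSketch {
    // Stand-in for the 1 MB chunking constant discussed above.
    static final int SPLIT_LEN = 1024 * 1024;

    // Placeholder for the intrinsic-backed helper; here it only copies bytes.
    static int cryptChunk(byte[] in, int inOfs, int len, byte[] out, int outOfs) {
        System.arraycopy(in, inOfs, out, outOfs, len);
        return len;
    }

    static int crypt(byte[] in, int inOfs, int inLen, byte[] out, int outOfs) {
        int done = 0;
        while (inLen - done >= SPLIT_LEN) {
            done += cryptChunk(in, inOfs + done, SPLIT_LEN, out, outOfs + done);
        }
        if (inLen > done) {
            done += cryptChunk(in, inOfs + done, inLen - done, out, outOfs + done);
        }
        return done;
    }
}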
* large chunks of data into 1MB sized chunks. This is to place
* an upper limit on the number of blocks encrypted in the intrinsic.
*
* For decrypting in-place byte[], calling methods must ct must set to null
end of sentence mangled
"must ct must set to null" => "must set ct to null"?
private static int implGCMCrypt(byte[] in, int inOfs, int inLen, byte[] ct,
int ctOfs, byte[] out, int outOfs,
GCTR gctr, GHASH ghash) {

int len = 0;
if (inLen > SPLIT_LEN) {
// Loop if input length is greater than the SPLIT_LEN
comment doesn't add anything not already obvious from the code
yeah.. probably right
while (inLen >= SPLIT_LEN) {
int partlen = implGCMCrypt0(in, inOfs + len, SPLIT_LEN, ct,
partlen = implGCMCrypt0(in, inOfs + len, SPLIT_LEN, ct,
why not int partlen
and get rid of line 594
can reuse the same partlen for all loops through the while
ctOfs + len, out, outOfs + len, gctr, ghash);
len += partlen;
inLen -= partlen;
}
}

// Finish any remaining data
comment doesn't add anything special
ok
@@ -666,6 +691,11 @@ abstract class GCMEngine {
byte[] originalOut = null;
int originalOutOfs = 0;

// True if op is in-place array decryption with the input & output
// Setting inPlaceArray to true turns off combined intrinsic processing.
yeah that's better
Actually the replacement isn't entirely accurate. This only applies to decryption and for buffers that don't overlap where input is ahead of output. That's why the comment is so wordy
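To make the condition concrete, here is a rough sketch of what is being described; isInPlaceArrayDecrypt is a hypothetical helper for illustration, not the actual assignment in GaloisCounterMode.

final class InPlaceCheckSketch {
    static boolean isInPlaceArrayDecrypt(boolean encryption, byte[] in, int inOfs,
                                         byte[] out, int outOfs) {
        // Only decryption is affected, and only when input and output share the
        // same backing array and the input region starts at or after the output.
        return !encryption && in == out && inOfs >= outOfs;
    }
}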
gctr, ghash);
byte[] array;
if (encryption) {
array = dst.array();
You could factor out lines 764 and 770 by changing line 762 to
byte[] array = encryption ? dst.array() : src.array();
That was intentional since line 763 checks the encryption boolean; I can define 'array' in that condition instead of having two conditions for the same thing.
} else {
Unsafe.getUnsafe().setMemory(((DirectBuffer)dst).address(),
len + dst.position(), (byte)0);
// If this is an in-place array, don't zero the src
The comment doesn't jibe with the line of code on the next line. It is the inverse of the comment.
ok
That is why it refers to "combined intrinsic" rather than spelling out AVX512. The change affects all platforms.
Well, the provided test ran with heap ByteBuffers, and direct ByteBuffers are handled differently because the data has to be copied for the intrinsic. But that data allocation is pretty low and I believe was the same in 17. So I'm not aware of a direct ByteBuffer slowdown as you now report.
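For context, a simplified sketch of why direct ByteBuffers take a copy; toHeap is illustrative only, not the provider's actual method.

import java.nio.ByteBuffer;

final class DirectBufferCopySketch {
    // The intrinsic-backed path operates on byte[], so bytes in a direct
    // buffer are first copied into a heap array before processing.
    static byte[] toHeap(ByteBuffer src, int len) {
        byte[] tmp = new byte[len];   // per-operation heap allocation
        src.get(tmp, 0, len);         // copy out of native memory
        return tmp;
    }
}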
Great point, I neglected to add benchmark coverage for the direct buffer case. I've updated my benchmark repository with a server using direct buffers:
Looking at this, it's not related to the same in-place issues. This is a result of the combined intrinsics requirement. Maybe some better tuning can be done, but I think this is unavoidable. I can consider this in a future PR.
That makes sense; perhaps we could document the finding in a new JIRA issue for posterity in case it impacts other folks as well? I can't overstate my appreciation for your work, thank you!
btw, that last commit comment is wrong; it's cleaning up from mcpowers's comments.
* For decrypting in-place byte[], calling methods must ct must set to null
* to avoid combined intrinsic, call GHASH directly before GCTR to avoid
* a bad tag exception. This check is not performed here because it would
* impose a check every operation which is less efficient.
Missing "for" after "check"?
null, 0, array, dst.arrayOffset() + dst.position(),
gctr, ghash);
} else {
int ofs = src.arrayOffset() + src.position();
Isn't this also used on line 774? Why not move this up and directly refer to it for both places?
The line 774 case only uses the calculated value once. I'm avoiding the unnecessary store & load operations that come with setting the value to a variable; I see them when I run javap -c to view the bytecode. It's purely a performance optimization. I do set the value to a variable in the line 778 case because calculating it twice is probably more expensive than once with the store & load operations.
I see.
This looks good, I only have nit comments.
* large chunks of data into 1MB sized chunks. This is to place
* an upper limit on the number of blocks encrypted in the intrinsic.
*
* For decrypting in-place byte[], calling methods must ct must set to null
Typo nit? Should it be "calling methods must set ct to null"
} else {
Unsafe.getUnsafe().setMemory(((DirectBuffer)dst).address(),
len + dst.position(), (byte)0);
// If this is no an in-place array, zero the dst buffer
nit: no -> not
@ascarpino This change now passes all automated pre-integration checks. ℹ️ This project also has non-automated pre-integration requirements. Please see the file CONTRIBUTING.md for details. After integration, the commit message for the final commit will be:
You can use pull request commands such as /summary, /contributor and /issue to adjust it as needed. At the time when this comment was updated there had been no new commits pushed to the gcm branch. ➡️ To integrate this PR with the above commit message to the master branch, type /integrate in a new comment.
originalOutOfs = outOfs;
return new byte[out.length];
}
inPlaceArray = (!encryption);
Is the "inPlaceArray" reset somewhere? When inOfs >= outOfs and the function will return on line 1051, the inPlaceArray value will not be set on line 1053. Is this intentional? My vacation is coming up and I can't finish off this review before I leave. I see that Jamil has approved it. No need to hold up this for me. Thanks.
@ascarpino this pull request can not be integrated into master due to one or more merge conflicts. To resolve these merge conflicts and update this pull request, you can run the following commands in the local repository for your personal fork:
git checkout gcm
git fetch https://git.openjdk.org/jdk master
git merge FETCH_HEAD
# resolve conflicts and follow the instructions given by git merge
git commit -m "Merge master"
git push
/integrate
Going to push as commit b4da0ee.
Your commit was automatically rebased without conflicts.
@ascarpino Pushed as commit b4da0ee. 💡 You may see a message that your pull request was closed with unmerged commits. This can be safely ignored.
What benchmark was this? How large were the buffers?
@theRealAph I reported this oddity to the mailing list, including a benchmark which I later updated to include additional coverage for direct buffers.
I would like a review of an update to the GCM code. A recent report showed that GCM memory usage for TLS was very large. This was a result of in-place buffers, which TLS uses, and how the code handled the combined intrinsic method during decryption. A temporary buffer was used because the combined intrinsic does gctr before ghash, which results in a bad tag. The fix is to not use the combined intrinsic during in-place decryption and to depend on the individual GHASH and CounterMode intrinsics. Direct ByteBuffers are not affected, as they are not used by the intrinsics directly.
The reduction in memory usage boosted performance back to where it was before, despite using the slower individual intrinsics (gctr & ghash separately). The extra memory allocation for the temporary buffer outweighed the benefit of the faster combined intrinsic.
There is no regression test because this is a memory-only change and test coverage already exists.
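As an illustration of the ordering constraint described above, here is a minimal sketch; Ghash and Gctr are hypothetical stand-ins for the provider's internal helpers, and decryptInPlace is not the actual method.

interface Ghash { void update(byte[] in, int ofs, int len); }
interface Gctr  { void doFinal(byte[] in, int inOfs, int len, byte[] out, int outOfs); }

final class InPlaceDecryptSketch {
    // For an in-place byte[] decrypt, hash the ciphertext before counter-mode
    // decryption overwrites it with plaintext, so no temporary copy is needed.
    static void decryptInPlace(byte[] buf, int ofs, int len, Ghash ghash, Gctr gctr) {
        ghash.update(buf, ofs, len);            // tag input: the original ciphertext
        gctr.doFinal(buf, ofs, len, buf, ofs);  // counter-mode decrypt in place
        // A combined GCTR+GHASH step would decrypt first and then hash the
        // plaintext it just produced, so using it in place would require
        // copying the ciphertext aside, which is the allocation this change removes.
    }
}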
Reviewing
Using
git
Checkout this PR locally:
$ git fetch https://git.openjdk.org/jdk pull/11121/head:pull/11121
$ git checkout pull/11121
Update a local copy of the PR:
$ git checkout pull/11121
$ git pull https://git.openjdk.org/jdk pull/11121/head
Using Skara CLI tools
Checkout this PR locally:
$ git pr checkout 11121
View PR using the GUI difftool:
$ git pr show -t 11121
Using diff file
Download this PR as a diff file:
https://git.openjdk.org/jdk/pull/11121.diff