[SPARK-56227][CORE] Fix GcmTransportCipher to correctly handle multiple messages per channel #55028

Open · aajisaka wants to merge 1 commit into apache:master
Conversation
**aajisaka (Member, Author):** Converted to draft. We are seeing job failures in benchmarking test.
Four bugs in `GcmTransportCipher` cause failures in production YARN clusters
when AES-GCM RPC encryption is enabled (`spark.network.crypto.cipher=AES/GCM/NoPadding`).
**Bug 1 — DecryptionHandler is single-use per channel (YARN container launch failure)**
After decoding the first post-auth message, `completed = true` was never reset.
`AesGcmHkdfStreaming` is a one-shot streaming primitive: each GCM message carries its
own random IV and requires a fresh `StreamSegmentDecrypter`. With `decrypter` declared
`final` and all guard flags stuck at their terminal values, every subsequent message
on the channel was silently discarded.
Fix: make `decrypter` non-final, add `resetForNextMessage()` that reinstates all
per-message state (including a fresh `StreamSegmentDecrypter`), and call it after each
successfully decoded message.
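A minimal sketch of this reset, with no Netty or Tink dependencies: the field and method names mirror the PR's description, but the class shape, the `-1`/`0` initial values, and the stand-in decrypter factory are illustrative assumptions, not the actual Spark code.

```java
// Illustrative model of DecryptionHandler's per-message state (assumed shape).
final class DecryptionState {
    // Guard flags and counters that previously stuck at terminal values.
    boolean completed = false;
    boolean decrypterInit = false;
    long expectedLength = -1;   // illustrative sentinel
    long segmentNumber = 0;
    // Previously declared `final`; must be replaceable so every GCM message
    // (each with its own random IV) gets a fresh StreamSegmentDecrypter.
    Object decrypter = newSegmentDecrypter();

    // Stand-in for AesGcmHkdfStreaming.newStreamSegmentDecrypter(...).
    static Object newSegmentDecrypter() {
        return new Object();
    }

    // The fix: reinstate ALL per-message state, including a fresh decrypter,
    // after each successfully decoded message.
    void resetForNextMessage() {
        completed = false;
        decrypterInit = false;
        expectedLength = -1;
        segmentNumber = 0;
        decrypter = newSegmentDecrypter();
    }
}
```

The key design point is that no flag may survive a message boundary: forgetting even one (as the original code forgot all of them) turns the handler into a one-shot decoder.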
**Bug 2 — TCP-coalesced messages lost (SparkSQL IllegalStateException)**
When TCP delivers multiple back-to-back GCM messages in a single `channelRead()` call
(common under shuffle load), the old code released the `ByteBuf` after decoding the
first message, discarding any remaining bytes. The next `channelRead()` then read bytes
from the middle of the second message as its length header, producing a negative `long`
and throwing `IllegalStateException("Invalid expected ciphertext length")`.
Fix: wrap the decode logic in an outer `while(true)` loop that exhausts all complete
messages from the buffer before releasing it; call `resetForNextMessage()` inside the
loop between messages.
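The loop shape can be sketched with plain `java.nio` buffers. This is a simplified model, assuming a bare 4-byte length prefix instead of the real 8-byte prefix plus GCM framing, and a `ByteBuffer` in place of a Netty `ByteBuf`; the point is only the exhaust-before-release structure.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Sketch of the outer-loop fix: keep decoding complete messages from one
// buffer until it is exhausted or only a partial message remains.
class BatchedFrameDecoder {
    List<byte[]> decode(ByteBuffer buf) {
        List<byte[]> messages = new ArrayList<>();
        while (true) {                            // outer loop added by the fix
            if (buf.remaining() < 4) break;       // partial length prefix: wait
            int len = buf.getInt(buf.position()); // peek without consuming
            if (buf.remaining() < 4 + len) break; // partial body: wait
            buf.getInt();                         // consume the length prefix
            byte[] body = new byte[len];
            buf.get(body);
            messages.add(body);
            // the real code calls resetForNextMessage() here, while the
            // buffer is still held, before attempting the next message
        }
        return messages;
    }
}
```

Without the outer loop, any bytes left in `buf` after the first complete message are released with it, which is exactly the mid-stream corruption described above.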
**Bug 3 — TCP-fragmented frame header causes IndexOutOfBoundsException (benchmark)**
`ByteBuf.readBytes(ByteBuffer dst)` requires exactly `dst.remaining()` bytes to be
present and throws `IndexOutOfBoundsException` if the source is shorter. Under high
load, TCP can fragment a GCM message's 24-byte internal header (or 8-byte length prefix)
across multiple `channelRead()` calls. The code incorrectly assumed `readBytes` would
stop early and leave `hasRemaining() == true`.
Fix: compute `toRead = Math.min(readable, dst.remaining())`, temporarily narrow
`dst.limit` to `position + toRead`, call `readBytes(dst)`, then restore `limit`.
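A sketch of that limit-narrowing pattern, using a source `ByteBuffer` in place of the Netty `ByteBuf` (the helper name is invented here): copying at most `min(available, dst.remaining())` bytes means a fragmented header simply leaves `dst` partially filled instead of throwing.

```java
import java.nio.ByteBuffer;

final class PartialReader {
    // Copy whatever is available from src into dst, never more than dst can
    // hold, leaving dst partially filled when src is short (the fragmented-
    // header case). Limits are restored before returning.
    static void readAvailable(ByteBuffer src, ByteBuffer dst) {
        int toRead = Math.min(src.remaining(), dst.remaining());
        int dstOldLimit = dst.limit();
        int srcOldLimit = src.limit();
        dst.limit(dst.position() + toRead);  // temporarily narrow the limit
        src.limit(src.position() + toRead);
        // With remaining() now equal on both sides this transfer cannot
        // overflow; the real code calls ByteBuf.readBytes(dst) at this point.
        dst.put(src);
        src.limit(srcOldLimit);
        dst.limit(dstOldLimit);              // restore the original limit
    }
}
```

After the call, the caller checks `dst.hasRemaining()` to decide whether the header is complete or another `channelRead()` must be awaited.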
**Bug 4 — EncryptionHandler shares mutable buffers across GcmEncryptedMessage instances**
`plaintextBuffer` and `ciphertextBuffer` were `EncryptionHandler` fields reused across
all `GcmEncryptedMessage` instances. Under Netty's write pipeline a new message can be
constructed (via `write()`) before a prior one's `transferTo()` completes; the new
constructor's `ciphertextBuffer.limit(0)` would corrupt the in-flight message's buffer.
Fix: allocate `plaintextBuffer` and `ciphertextBuffer` inside the `GcmEncryptedMessage`
constructor so each message owns its own buffers.
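A sketch of the ownership change, assuming `java.nio` heap buffers and an illustrative segment size; the class and field names follow the PR description, but the constructor signature and the 16-byte GCM tag allowance are assumptions for this example.

```java
import java.nio.ByteBuffer;

// Each message allocates its own working buffers in its constructor instead
// of borrowing mutable buffers shared by the handler, so constructing
// message B can no longer clobber in-flight message A's buffer state.
class GcmEncryptedMessageSketch {
    private final ByteBuffer plaintextBuffer;
    private final ByteBuffer ciphertextBuffer;

    GcmEncryptedMessageSketch(int segmentSize) {
        this.plaintextBuffer = ByteBuffer.allocate(segmentSize);
        this.ciphertextBuffer = ByteBuffer.allocate(segmentSize + 16 /* GCM tag */);
        this.ciphertextBuffer.limit(0);  // now safe: touches only this message
    }

    ByteBuffer plaintext() { return plaintextBuffer; }
    ByteBuffer ciphertext() { return ciphertextBuffer; }
}
```

The trade-off is one extra allocation per message in exchange for eliminating a write-pipeline race, a reasonable price given Netty may interleave `write()` and `transferTo()` across messages.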
**Minor cleanups**
- Cache `headerLength` in `DecryptionHandler` to avoid repeated `getHeaderLength()` calls
- Replace `Integer.min()` with `Math.min()` for style consistency
**New tests**
- `testMultipleMessages`: regression for Bug 1 — same `DecryptionHandler` decodes two
independent messages delivered via separate `channelRead()` calls
- `testBatchedMessages`: regression for Bug 2 — two ciphertexts concatenated into one
`ByteBuf` and delivered in a single `channelRead()` call
- `testSplitHeader`: regression for Bug 3 — ciphertext split at byte 12 (8-byte length
field + 4 bytes into the 24-byte GCM header) across two `channelRead()` calls
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Force-pushed from 7b33cf6 to ed42963.
**aajisaka (Member, Author):** Our internal benchmark tests passed in YARN cluster. This patch is ready for review.
**What changes were proposed in this pull request?**

This fixes four bugs in `GcmTransportCipher` introduced by SPARK-47172.

**Bug 1: `DecryptionHandler` silently drops every message after the first.**

`AesGcmHkdfStreaming` is a one-shot streaming primitive: each independently encrypted message carries its own random IV and requires a fresh `StreamSegmentDecrypter`. The `DecryptionHandler` never reset its per-message state (`completed`, `decrypterInit`, `expectedLength`, `segmentNumber`, etc.) nor replaced the single `final StreamSegmentDecrypter` instance between messages. After the first message was decoded, `completed` stayed `true` permanently, and all subsequent messages were silently dropped because both `initalizeExpectedLength()` and `initalizeDecrypter()` returned early as no-ops and the inner while loop never ran.

Fix: add `resetForNextMessage()`, which clears all per-message fields and allocates a new `StreamSegmentDecrypter`; call it after each fully decoded message.

**Bug 2: `DecryptionHandler` discards bytes from messages batched in the same `channelRead()` call.**

Under shuffle load, TCP coalesces multiple encrypted messages into a single `ByteBuf`. The original code exited the decryption loop as soon as one message completed and released the buffer, including any trailing bytes belonging to subsequent messages. The next `channelRead()` then received bytes starting mid-stream of the second message, interpreted them as an 8-byte length header, and threw `IllegalStateException: Invalid expected ciphertext length`.

Fix: wrap the decryption logic in an outer loop that continues consuming messages from the same buffer until either the buffer is exhausted or a partial message is encountered. `resetForNextMessage()` is called inside the loop immediately after each complete message, while the buffer is still held.

**Bug 3: TCP-fragmented frame header causes `IndexOutOfBoundsException`.**

`ByteBuf.readBytes(ByteBuffer dst)` requires exactly `dst.remaining()` bytes to be present and throws `IndexOutOfBoundsException` if the source is shorter. Under high load, TCP can fragment a GCM message's 24-byte internal header (or 8-byte length prefix) across multiple `channelRead()` calls. The code incorrectly assumed `readBytes` would stop early and leave `hasRemaining() == true`.

Fix: compute `toRead = Math.min(readable, dst.remaining())`, temporarily narrow `dst.limit` to `position + toRead`, call `readBytes(dst)`, then restore `limit`.

**Bug 4 (minor): `EncryptionHandler` shares working buffers across concurrent `GcmEncryptedMessage` instances.**

`plaintextBuffer` and `ciphertextBuffer` were fields of `EncryptionHandler` passed into every `GcmEncryptedMessage`. The constructor's `ciphertextBuffer.limit(0)` call could corrupt an in-flight message's buffer state if Netty batched writes. Fix: move buffer ownership into `GcmEncryptedMessage` so each message allocates its own working buffers.

Without the above fixes, enabling `AES/GCM/NoPadding` RPC encryption causes YARN executor containers to fail: the auth handshake succeeds, but all post-auth RPC messages are dropped or corrupted, leaving the channel hung until YARN kills the container.

**Why are the changes needed?**

To successfully run Spark jobs on YARN with `spark.network.crypto.cipher="AES/GCM/NoPadding"`.

Fixes #54999
**Does this PR introduce any user-facing change?**

No.

**How was this patch tested?**

Added unit tests:

- `testMultipleMessages`: encrypts and decrypts two independent messages through the same handler pair with separate `channelRead()` calls.
- `testBatchedMessages`: concatenates two ciphertexts into one `ByteBuf` and delivers them in a single `channelRead()` call, verifying both are decoded correctly.
- `testSplitHeader`: ciphertext split at byte 12 (8-byte length field + 4 bytes into the 24-byte GCM header) across two `channelRead()` calls.

Also ported these changes to our Spark 3.4.x-based internal branch and ran multiple jobs in a YARN cluster successfully.

**Was this patch authored or co-authored using generative AI tooling?**

Generated-by: Claude Code (Claude Opus 4.6)