perf: add spark.comet.exec.shuffle.maxBufferedBatches config #3800
Closed
andygrove wants to merge 2 commits into apache:main from
Conversation
Add a new configuration option to limit the number of batches buffered in memory before spilling during native shuffle. Setting a small value causes earlier spilling, reducing peak memory usage on executors at the cost of more disk I/O. The default of 0 preserves existing behavior (spill only when the memory pool is exhausted).

Also fix a too-many-open-files issue where each partition held one spill file descriptor open for the lifetime of the task. The spill file is now closed after each spill event and reopened in append mode for the next, keeping FD usage proportional to active writes rather than total partitions.
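The count-based early-spill trigger described above can be sketched roughly as follows. This is an illustrative sketch only: `SpillableBuffer`, `insert`, and the byte-array stand-in for Arrow batches are assumptions, not Comet's actual API.

```scala
// Sketch: a per-partition buffer that spills once it holds `maxBufferedBatches`
// batches. A threshold of 0 disables the count-based trigger, so spilling
// happens only when the memory pool refuses an allocation (existing behavior).
final class SpillableBuffer(maxBufferedBatches: Int) {
  private var buffered = List.empty[Array[Byte]] // stand-in for Arrow batches
  var spillCount = 0                             // how many times we spilled

  def insert(batch: Array[Byte]): Unit = {
    buffered = batch :: buffered
    if (maxBufferedBatches > 0 && buffered.size >= maxBufferedBatches) {
      spill()
    }
  }

  def spill(): Unit = {
    // Real code would serialize the buffered batches to a spill file here.
    buffered = Nil
    spillCount += 1
  }
}
```

With a threshold of 2, inserting four batches spills twice; with the default of 0, nothing spills until memory pressure forces it, matching the existing behavior.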
andygrove (Member, Author):
This does not work in practice.
Which issue does this PR close?
Closes #.
Rationale for this change
When shuffle spills only when the memory pool is exhausted, peak memory usage on executors can be very high — especially with many concurrent tasks. Spilling earlier, before memory pressure is critical, reduces peak memory at the cost of slightly more disk I/O.
What changes are included in this PR?
Adds a new spark.comet.exec.shuffle.maxBufferedBatches config (default 0 = disabled). When set, the native shuffle repartitioner spills once it has buffered this many batches, rather than waiting for the memory pool to refuse an allocation.

How are these changes tested?

Existing shuffle tests cover the spill path. The new config defaults to 0 (disabled), so no existing behaviour changes without opt-in.
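The file-descriptor fix in this PR follows a close-then-reopen-in-append pattern. A minimal sketch in plain JVM I/O (the `SpillWriter` name is illustrative, not Comet's actual code):

```scala
import java.io.{File, FileOutputStream}

// Sketch: instead of holding one open stream per partition for the whole task
// (O(partitions) file descriptors), open the spill file only for the duration
// of each spill event and reopen it in append mode for the next one.
final class SpillWriter(file: File) {
  def spill(bytes: Array[Byte]): Unit = {
    // append = true, so successive spill events accumulate in the same file
    val out = new FileOutputStream(file, true)
    try out.write(bytes)
    finally out.close() // FD released as soon as this spill event finishes
  }
}
```

This keeps FD usage proportional to in-flight spill writes rather than total partitions, at the cost of one open/close syscall pair per spill event.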