Adds option to configure max batch size in readManyByPartitionKeys #48930
Merged
FabianMeiswinkel merged 5 commits into main on Apr 24, 2026
Conversation
Member (Author):

@sdkReviewAgent

Contributor:
Pull request overview
Adds a per-request option to control the maximum batch size used by readManyByPartitionKeys, and wires it through the Cosmos Java SDK internals and Spark connector configuration.
Changes:
- Add `maxBatchSize` getter/setter to `CosmosReadManyByPartitionKeysRequestOptions` and bridge accessor plumbing.
- Plumb `maxBatchSize` through `CosmosAsyncContainer` -> `AsyncDocumentClient` -> `RxDocumentClientImpl` and use it when building batches.
- Add Spark connector config key parsing + unit tests, and apply the setting when constructing request options.
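The override-vs-default resolution plumbed through `CosmosAsyncContainer` can be sketched as below. This is a minimal, self-contained illustration, not the SDK's actual code; the class name and the default value of 1000 are assumptions for the example only.

```java
// Hypothetical sketch: a per-request maxBatchSize override falls back to a
// global default when unset. Names and the default value are illustrative.
public final class EffectiveOptionDemo {
    static final int DEFAULT_MAX_BATCH_SIZE = 1000; // assumed default, not from the PR

    static int resolveMaxBatchSize(Integer perRequestOverride) {
        // A null override means "not configured on the request options":
        // defer to the client-wide default.
        return perRequestOverride != null ? perRequestOverride : DEFAULT_MAX_BATCH_SIZE;
    }

    public static void main(String[] args) {
        System.out.println(resolveMaxBatchSize(null)); // falls back to the default
        System.out.println(resolveMaxBatchSize(250));  // per-request override wins
    }
}
```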
Reviewed changes
Copilot reviewed 10 out of 10 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| sdk/cosmos/azure-cosmos/src/main/java/com/azure/cosmos/models/CosmosReadManyByPartitionKeysRequestOptions.java | Introduces public per-request maxBatchSize API and exposes it via bridge accessor. |
| sdk/cosmos/azure-cosmos/src/main/java/com/azure/cosmos/implementation/RxDocumentClientImpl.java | Threads maxBatchSize into the internal execution path and uses it for batch construction. |
| sdk/cosmos/azure-cosmos/src/main/java/com/azure/cosmos/implementation/ImplementationBridgeHelpers.java | Extends request-options accessor interface to surface maxBatchSize. |
| sdk/cosmos/azure-cosmos/src/main/java/com/azure/cosmos/implementation/CosmosReadManyByPartitionKeysRequestOptionsImpl.java | Stores/clones the new maxBatchSize option in the internal options implementation. |
| sdk/cosmos/azure-cosmos/src/main/java/com/azure/cosmos/implementation/AsyncDocumentClient.java | Updates internal client interface to accept maxBatchSize. |
| sdk/cosmos/azure-cosmos/src/main/java/com/azure/cosmos/CosmosAsyncContainer.java | Resolves effective maxBatchSize (per-request override vs global default) and passes it down. |
| sdk/cosmos/azure-cosmos-spark_3/src/test/scala/com/azure/cosmos/spark/CosmosConfigSpec.scala | Adds Spark config parsing tests for readManyByPk.maxBatchSize (and updated expectations for prefetch). |
| sdk/cosmos/azure-cosmos-spark_3/src/main/scala/com/azure/cosmos/spark/ItemsPartitionReaderWithReadManyByPartitionKey.scala | Applies Spark config overrides to request options via foreach. |
| sdk/cosmos/azure-cosmos-spark_3/src/main/scala/com/azure/cosmos/spark/CosmosConfig.scala | Adds new Spark config key + parsing; changes prefetch config default handling to defer to SDK when unset. |
| sdk/cosmos/azure-cosmos-spark_3/dev/README.md | Adds build command for an additional Spark 4.1 module. |
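The Spark connector rows above add a config key whose parsed value is optional, deferring to the SDK default when unset. A hedged sketch of that parse-and-validate pattern follows; the full key name (`spark.cosmos.read.readManyByPk.maxBatchSize`) and the helper names are assumptions, since the PR only surfaces the `readManyByPk.maxBatchSize` suffix.

```java
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch of parsing an optional integer Spark config key.
// The full key name is an assumption; only the suffix appears in the PR.
public final class ReadManyConfigDemo {
    static final String KEY = "spark.cosmos.read.readManyByPk.maxBatchSize";

    static Optional<Integer> parseMaxBatchSize(Map<String, String> cfg) {
        String raw = cfg.get(KEY);
        if (raw == null) {
            return Optional.empty(); // unset: defer to the SDK default
        }
        int value = Integer.parseInt(raw.trim());
        if (value < 1) {
            throw new IllegalArgumentException(KEY + " must be >= 1 but was " + value);
        }
        return Optional.of(value);
    }

    public static void main(String[] args) {
        System.out.println(parseMaxBatchSize(Map.of(KEY, "500")));
        System.out.println(parseMaxBatchSize(Map.of()));
    }
}
```

An `Optional` result pairs naturally with the `foreach`-style application mentioned for `ItemsPartitionReaderWithReadManyByPartitionKey`: the override is applied to the request options only when present.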
Comments suppressed due to low confidence (1)
sdk/cosmos/azure-cosmos/src/main/java/com/azure/cosmos/implementation/RxDocumentClientImpl.java:4384

`maxBatchSize` is used as the step size for the batching loop downstream; if a caller passes 0 (or a negative value), the loop `for (int i = 0; i < allPks.size(); i += maxPksPerPartitionQuery)` will never advance and can hang. Add argument validation similar to `maxConcurrentBatchPrefetch` (>= 1) and fail fast with a clear message.

```java
checkNotNull(partitionKeys, "Argument 'partitionKeys' must not be null.");
checkArgument(!partitionKeys.isEmpty(), "Argument 'partitionKeys' must not be empty.");
checkArgument(maxConcurrentBatchPrefetch >= 1,
    "Argument 'maxConcurrentBatchPrefetch' must be greater than or equal to 1.");
```
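The hang described in the comment above can be reproduced with a self-contained sketch: a zero step size means the loop index never advances, so validating up front fails fast instead. Class and method names here are illustrative, not the SDK's.

```java
import java.util.List;

// Self-contained demo of the batching loop's failure mode with a step of 0,
// and the fail-fast guard the comment above recommends.
public final class BatchLoopValidationDemo {
    static int countBatches(List<String> allPks, int maxPksPerPartitionQuery) {
        if (maxPksPerPartitionQuery < 1) {
            // Without this guard, the loop below would spin forever for step 0.
            throw new IllegalArgumentException(
                "Argument 'maxPksPerPartitionQuery' must be greater than or equal to 1.");
        }
        int batches = 0;
        for (int i = 0; i < allPks.size(); i += maxPksPerPartitionQuery) {
            batches++;
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> pks = List.of("pk1", "pk2", "pk3", "pk4", "pk5");
        System.out.println(countBatches(pks, 2)); // 3 batches: sizes [2, 2, 1]
        try {
            countBatches(pks, 0); // would hang without the guard
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```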
…CosmosReadManyByPartitionKeysRequestOptions.java Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Member (Author):

/azp run java - cosmos - spark

Azure Pipelines successfully started running 1 pipeline(s).
xinlian12 reviewed Apr 24, 2026

Member:

✅ Review complete (32:04). Posted 3 inline comment(s). Steps: ✓ context, correctness, cross-sdk, design, history, past-prs, synthesis, test-coverage
…/Azure/azure-sdk-for-java into users/fabianm/configMaxBatchsize
Member (Author):

/azp run java - cosmos - spark

Azure Pipelines successfully started running 1 pipeline(s).