Adding back-off cap of 5 seconds for 429 and skipping metadata calls for queries with partitioning strategy Restrictive #28764

Conversation

@FabianMeiswinkel (Member) commented on May 10, 2022

Description

This PR contains two small optimizations

  1. SDK + Spark - currently the Cosmos DB backend can request a back-off time significantly longer than 5 seconds (we have seen up to 260 seconds in logs) for 429/3200 (throttling because provisioned throughput is exceeded) when a replica has already returned multiple consecutive 429s. This PR changes the logic to enforce an upper bound of 5 seconds on the back-off time (see the first sketch after this list). This change has been vetted with Prashant (who owns resource governance on the service).
  2. Spark - for queries with the Restrictive partitioning strategy we currently still retrieve the metadata (min LSN, max LSN, document count, total document size) that is used to calculate the number of Spark partitions to create for non-restrictive partitioning strategies. For queries with Restrictive partitioning this metadata is not actually needed, so this PR adds an optimization to skip the metadata I/O calls when they are not strictly necessary (see the second sketch after this list).
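
A minimal sketch of the capping idea from item 1 - not the SDK's actual retry-policy code; the class name RetryAfterCap and the constant MAX_BACKOFF_FOR_THROTTLING are hypothetical, only the 5-second cap itself comes from this PR:

```java
import java.time.Duration;

// Hypothetical illustration: whatever back-off the service requests via the
// retry-after header for a 429/3200 is clamped to an upper bound of 5 seconds.
public final class RetryAfterCap {

    // Hypothetical constant; the real value lives inside the SDK's retry policy.
    private static final Duration MAX_BACKOFF_FOR_THROTTLING = Duration.ofSeconds(5);

    // Returns the back-off to actually wait, never longer than 5 seconds.
    static Duration capRetryAfter(Duration serverRequestedRetryAfter) {
        if (serverRequestedRetryAfter == null) {
            return Duration.ZERO;
        }
        return serverRequestedRetryAfter.compareTo(MAX_BACKOFF_FOR_THROTTLING) > 0
            ? MAX_BACKOFF_FOR_THROTTLING
            : serverRequestedRetryAfter;
    }

    public static void main(String[] args) {
        // e.g. the service asks for 260 seconds after consecutive 429s on a replica
        System.out.println(capRetryAfter(Duration.ofSeconds(260))); // PT5S
        System.out.println(capRetryAfter(Duration.ofMillis(800)));  // PT0.8S
    }
}
```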

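A second sketch, for item 2 - a hypothetical Java illustration only (the actual connector code is Scala in azure-cosmos-spark, and the names PartitionPlannerSketch, PartitioningStrategy and MetadataClient are invented here): with the Restrictive strategy the partition plan is one Spark partition per feed range, so the metadata I/O can be skipped entirely.

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.LongStream;

public final class PartitionPlannerSketch {

    enum PartitioningStrategy { DEFAULT, RESTRICTIVE, AGGRESSIVE, CUSTOM }

    record SparkPartition(String feedRange) { }

    interface MetadataClient {
        // Stands in for the metadata I/O (min LSN, max LSN, doc count, total size).
        long estimateTargetPartitionCount(String feedRange);
    }

    static List<SparkPartition> planPartitions(
            PartitioningStrategy strategy,
            List<String> feedRanges,
            MetadataClient metadataClient) {

        if (strategy == PartitioningStrategy.RESTRICTIVE) {
            // No metadata I/O needed - one Spark partition per physical partition.
            return feedRanges.stream().map(SparkPartition::new).collect(Collectors.toList());
        }

        // Non-restrictive strategies still consult the metadata to decide how many
        // Spark partitions to create per feed range (details omitted in this sketch).
        return feedRanges.stream()
            .flatMap(range -> {
                long count = Math.max(1, metadataClient.estimateTargetPartitionCount(range));
                return LongStream.range(0, count)
                    .mapToObj(i -> new SparkPartition(range + "#" + i));
            })
            .collect(Collectors.toList());
    }
}
```
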
All SDK Contribution checklist:

  • The pull request does not introduce breaking changes.
  • CHANGELOG is updated for new features, bug fixes or other significant changes.
  • I have read the contribution guidelines.

General Guidelines and Best Practices

  • Title of the pull request is clear and informative.
  • There are a small number of commits, each of which has an informative message. This means that previously merged commits do not appear in the history of the PR. For more information on cleaning up the commits in your PR, see this page.

Testing Guidelines

  • Pull request includes test coverage for the included changes.

@ghost added the Cosmos label on May 10, 2022
@FabianMeiswinkel changed the title from "Adding back-off cap of 5 seconds for 429 and skipping metadata calls for queries with partitioning strategy Restrictive" to "DRAFT - DO NOT REVIEW YET!!!! - Adding back-off cap of 5 seconds for 429 and skipping metadata calls for queries with partitioning strategy Restrictive" on May 10, 2022
@azure-sdk (Collaborator) commented:

API change check for com.azure:azure-cosmos

API changes are not detected in this pull request for com.azure:azure-cosmos

@FabianMeiswinkel changed the title from "DRAFT - DO NOT REVIEW YET!!!! - Adding back-off cap of 5 seconds for 429 and skipping metadata calls for queries with partitioning strategy Restrictive" to "Adding back-off cap of 5 seconds for 429 and skipping metadata calls for queries with partitioning strategy Restrictive" on May 11, 2022

@xinlian12 (Member) left a comment:


LGTM, thanks for the quick fix and solution 👍


@kushagraThapar (Member) left a comment:


@FabianMeiswinkel - I see we have not updated the other places where we use the retry-after duration, like BatchExecUtils and BatchResponseParser#getRetryAfterDuration().
Any reason for not changing these?

@FabianMeiswinkel (Member, Author) replied:

> @FabianMeiswinkel - I see we have not updated the other places where we use the retry-after duration, like BatchExecUtils and BatchResponseParser#getRetryAfterDuration(). Any reason for not changing these?

Good catch - I initially thought it wasn't necessary to also cap the back-off time here, because when a BatchResponse's overall result is a 429 it ultimately gets converted into a CosmosException (and we would apply the cap there). But for Bulk it is possible that the 429 is just one of the inner results - and there applying the cap is needed. So I changed this to always cap the BatchResult back-off as well.
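
To illustrate the point discussed above - a hypothetical sketch only (BatchRetryAfterSketch, OperationResult and retryAfterForOperation are invented names, not the SDK's BatchResponseParser code): for bulk, a 429 can be just one of the per-operation results inside an otherwise successful batch response, so the 5-second cap has to be applied per operation too, not only when the whole response surfaces as a CosmosException.

```java
import java.time.Duration;

public final class BatchRetryAfterSketch {

    private static final Duration MAX_BACKOFF_FOR_THROTTLING = Duration.ofSeconds(5);

    // A single operation result inside a batch/bulk response.
    record OperationResult(int statusCode, Duration retryAfter) { }

    // Cap the retry-after of a throttled (429) inner operation at 5 seconds.
    static Duration retryAfterForOperation(OperationResult result) {
        Duration retryAfter = result.retryAfter() == null ? Duration.ZERO : result.retryAfter();
        if (result.statusCode() == 429 && retryAfter.compareTo(MAX_BACKOFF_FOR_THROTTLING) > 0) {
            return MAX_BACKOFF_FOR_THROTTLING;
        }
        return retryAfter;
    }
}
```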

@FabianMeiswinkel merged commit 0bc9e54 into Azure:main on May 16, 2022