Add timeout and retry logic to Azure token fetch #3113
Conversation
Previously, the getAccessToken method used an unbounded blocking call which could hang indefinitely if Azure's token endpoint was slow or unresponsive. This change adds defensive timeout and retry mechanisms:

- 15-second timeout per individual token request attempt
- Exponential backoff retry (3 attempts: 2s, 4s, 8s) with 50% jitter to prevent thundering herd during mass failures
- 90-second overall timeout as a safety net
- Specific retry logic for known transient Azure AD errors (AADSTS50058, AADSTS50078, AADSTS700084, 503, 429)

This makes the system more resilient to transient Azure service issues and prevents indefinite blocking that could cascade into request timeouts or service degradation.
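The retry schedule described above (3 attempts at 2s, 4s, 8s, plus jitter) can be sketched in plain Java. This helper is illustrative only, not the PR's actual Reactor-based implementation; the class and method names are hypothetical:

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class BackoffSchedule {
    // Computes the delay before each retry: initialDelay * 2^attempt, plus up
    // to jitterFactor of that delay as random jitter (Reactor applies jitter
    // relative to the computed delay; this sketch adds it upward only).
    static List<Duration> delays(Duration initial, int retries, double jitterFactor, Random rng) {
        List<Duration> out = new ArrayList<>();
        for (int attempt = 0; attempt < retries; attempt++) {
            long baseMillis = initial.toMillis() << attempt; // 2s, 4s, 8s for initial=2s
            long jitter = (long) (rng.nextDouble() * baseMillis * jitterFactor);
            out.add(Duration.ofMillis(baseMillis + jitter));
        }
        return out;
    }

    public static void main(String[] args) {
        // With jitterFactor 0 the schedule is exactly the base backoff.
        List<Duration> d = delays(Duration.ofSeconds(2), 3, 0.0, new Random());
        System.out.println(d); // [PT2S, PT4S, PT8S]
    }
}
```

With the PR's defaults, worst-case blocking is bounded by roughly 4 attempts of 15s each plus the backoff delays, instead of being unbounded.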
dimas-b
left a comment
Nice improvement! Thanks for your contribution, @fivetran-rahulprakash !
```java
defaultAzureCredential
    .getToken(new TokenRequestContext().addScopes(scope).setTenantId(tenantId))
    .timeout(Duration.ofSeconds(15)) // Per-attempt timeout
    .blockOptional()
```
We have RealmConfig here; could you add a general setting in FeatureConfiguration for this timeout? I suppose it could be applicable to other integrations too (but, of course, in this PR we can concentrate on Azure only).
I've added four new configuration constants in FeatureConfiguration:

- CLOUD_API_TIMEOUT_SECONDS (default: 15) - Per-attempt timeout
- CLOUD_API_RETRY_COUNT (default: 3) - Number of retry attempts
- CLOUD_API_RETRY_DELAY_SECONDS (default: 2) - Initial delay for exponential backoff
- CLOUD_API_RETRY_JITTER_MILLIS (default: 500) - Maximum jitter to prevent thundering herd

These use generic naming (CLOUD_API_*) rather than Azure-specific names, making them reusable for future implementations.
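The behavior of defaulted, realm-overridable settings like these can be sketched with a self-contained stand-in (the lookup helper and map here are hypothetical, not Polaris's actual RealmConfig/FeatureConfiguration API):

```java
import java.util.Map;

public class CloudApiDefaults {
    // Hypothetical stand-in for a realm-level configuration lookup:
    // an explicit override wins, otherwise the compiled-in default applies.
    static int getConfig(Map<String, Integer> realmOverrides, String key, int defaultValue) {
        return realmOverrides.getOrDefault(key, defaultValue);
    }

    public static void main(String[] args) {
        Map<String, Integer> overrides = Map.of("CLOUD_API_RETRY_COUNT", 5);
        // Overridden key returns the admin's value; unset keys fall back to defaults.
        System.out.println(getConfig(overrides, "CLOUD_API_RETRY_COUNT", 3));      // 5
        System.out.println(getConfig(overrides, "CLOUD_API_TIMEOUT_SECONDS", 15)); // 15
    }
}
```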
```java
            tenantId,
            error.getMessage()))
    .retryWhen(
        Retry.backoff(3, Duration.ofSeconds(2)) // 3 retries: 2s, 4s, 8s
```
Having backoff settings configurable could also be helpful.
Addressed
```java
        .jitter(0.5) // ±50% jitter to prevent thundering herd
        .filter(
            throwable ->
                throwable instanceof java.util.concurrent.TimeoutException
```
TimeoutException is already handled by isRetriableAzureException, right?
Yes, you're absolutely right! I missed that. Removed the duplicate check now. Thank you for catching that!
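The transient-error filter this exchange refers to can be sketched as a pure helper. The error codes come from the PR description, but this isRetriableAzureException body is an illustrative reconstruction, not the PR's actual code:

```java
import java.util.List;
import java.util.concurrent.TimeoutException;

public class AzureRetryFilter {
    // Transient error markers listed in the PR description.
    private static final List<String> RETRIABLE_MARKERS =
        List.of("AADSTS50058", "AADSTS50078", "AADSTS700084", "503", "429");

    // Illustrative reconstruction: timeouts are retriable, as are errors whose
    // message contains a known transient AADSTS code or HTTP status.
    static boolean isRetriableAzureException(Throwable t) {
        if (t instanceof TimeoutException) {
            return true;
        }
        String message = t.getMessage();
        return message != null && RETRIABLE_MARKERS.stream().anyMatch(message::contains);
    }

    public static void main(String[] args) {
        System.out.println(isRetriableAzureException(new TimeoutException()));            // true
        System.out.println(isRetriableAzureException(
            new RuntimeException("AADSTS50058: token endpoint timeout")));                // true
        System.out.println(isRetriableAzureException(new RuntimeException("bad creds"))); // false
    }
}
```

Because TimeoutException is handled inside this predicate, an extra `instanceof TimeoutException` check in the `.filter(...)` lambda would indeed be redundant, which is the duplication removed above.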
```java
                    "Azure token fetch exhausted after %d attempts for tenant %s",
                    retrySignal.totalRetries(), tenantId),
                retrySignal.failure())))
    .blockOptional(Duration.ofSeconds(90)) // Maximum total wait time
```
Why do we need this on top of .timeout() (line 337)?
Good point! I initially added the overall timeout as a safety net to ensure we never block indefinitely, but you're right that it's unnecessary. The combination of the per-attempt timeout and .retryWhen() with exponential backoff already provides sufficient protection. Removed it now. Thanks for the feedback!
- Add 4 generic cloud provider API configuration constants:
  - CLOUD_API_TIMEOUT_SECONDS (default: 15)
  - CLOUD_API_RETRY_COUNT (default: 3)
  - CLOUD_API_RETRY_DELAY_SECONDS (default: 2)
  - CLOUD_API_RETRY_JITTER_MILLIS (default: 500)
- Update AzureCredentialsStorageIntegration to use configurable values
- Remove hardcoded 90s overall timeout (per-attempt timeout + retries sufficient)
- Improve error logging and retry logic documentation
- Generic naming allows future reuse by AWS/GCP storage integrations

Addresses review comments from dimas-b on PR 3113

Thank you @dimas-b for the thorough review and excellent suggestions! I've addressed all your comments.
```java
        .defaultValue(2)
        .buildFeatureConfiguration();

public static final FeatureConfiguration<Integer> CLOUD_API_RETRY_JITTER_MILLIS =
```
I chose to use milliseconds instead of a 0-1 jitter factor for a few reasons:

- User clarity - it's more intuitive for operators to specify "500 milliseconds of jitter" than to understand what a "0.5 jitter factor" means (50% of the retry delay)
- Concrete vs relative - millis give direct control over the maximum random delay added, while a factor requires understanding how it interacts with the exponential backoff delays
- Consistency - all other time-based configs use concrete units (seconds/millis) rather than abstract factors
- Predictability - with millis, the maximum jitter is always clear regardless of retry delay values

The small conversion cost (jitterMillis / 1000.0) is negligible compared to the benefit of making the config more operator-friendly. Happy to change to a 0-1 factor if you prefer that approach, though!
dimas-b
left a comment
LGTM overall, just some minor comments about the new config.
```java
private AccessToken getAccessToken(RealmConfig realmConfig, String tenantId) {
    int timeoutSeconds = realmConfig.getConfig(CLOUD_API_TIMEOUT_SECONDS);
    int retryCount = realmConfig.getConfig(CLOUD_API_RETRY_COUNT);
    int initialDelaySeconds = realmConfig.getConfig(CLOUD_API_RETRY_DELAY_SECONDS);
```
Would you mind using millis for initialDelaySeconds? In some cases even 1 sec may be too long. Let's delegate what the minimum delay should be to the admin user who configures it.
Same for timeoutSeconds... I hope the Azure SDK supports millis.
Thanks for the suggestion! Changed both to milliseconds:

- CLOUD_API_TIMEOUT_MILLIS (default: 15000ms)
- CLOUD_API_RETRY_DELAY_MILLIS (default: 2000ms)
- Rename CLOUD_API_TIMEOUT_SECONDS to CLOUD_API_TIMEOUT_MILLIS (default: 15000ms)
- Rename CLOUD_API_RETRY_DELAY_SECONDS to CLOUD_API_RETRY_DELAY_MILLIS (default: 2000ms)
- Update AzureCredentialsStorageIntegration to use Duration.ofMillis()
- Allows admins to configure sub-second timeouts for finer control

Addresses review feedback from dimas-b
dimas-b
left a comment
Sorry, one more minor comment from my side... Otherwise LGTM 👍
```java
int retryCount = realmConfig.getConfig(CLOUD_API_RETRY_COUNT);
int initialDelayMillis = realmConfig.getConfig(CLOUD_API_RETRY_DELAY_MILLIS);
int jitterMillis = realmConfig.getConfig(CLOUD_API_RETRY_JITTER_MILLIS);
double jitter = jitterMillis / 1000.0; // Convert millis to fraction for jitter factor
```
I'm not sure I fully understand this logic... per the javadoc of reactor.util.retry.RetryBackoffSpec.jitter(), the factor applies to the "computed delay", which may not be 1000 ms 🤔 How can the user reason about what a CLOUD_API_RETRY_JITTER_MILLIS value of 750 (for example) means?
Would it not be simpler to use a 0.0-1.0 factor value in the config?
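The concern can be made concrete with arithmetic: a jitter factor scales with the computed backoff delay, so a fixed millis value divided by 1000 does not bound jitter at that many milliseconds. A small illustration in plain Java (hypothetical helper, not the PR's code):

```java
public class JitterMath {
    // Maximum jitter under a factor-based scheme: factor times the
    // computed backoff delay for that attempt.
    static long maxJitterMillis(long computedDelayMillis, double jitterFactor) {
        return (long) (computedDelayMillis * jitterFactor);
    }

    public static void main(String[] args) {
        // CLOUD_API_RETRY_JITTER_MILLIS = 750 becomes a factor of 0.75.
        double factor = 750 / 1000.0;
        // On the third attempt (computed delay 8000 ms) that yields up to
        // 6000 ms of jitter, not the 750 ms the config name suggests.
        System.out.println(maxJitterMillis(8000, factor)); // 6000
    }
}
```

This is why a plain 0.0-1.0 factor in the config is easier to reason about: its meaning does not drift as the backoff delay grows.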
Problem

The getAccessToken() method in AzureCredentialsStorageIntegration used an unbounded blocking call which could hang indefinitely if Azure's token endpoint was slow or unresponsive. This could lead to indefinite blocking that cascades into request timeouts or service degradation.

Solution

This PR adds defensive timeout and retry mechanisms using Project Reactor's built-in capabilities. Retries target known transient Azure AD errors:

- AADSTS50058 - Token endpoint timeout
- AADSTS50078 - Service temporarily unavailable
- AADSTS700084 - Token refresh required
- 503 - Service unavailable
- 429 - Too many requests

Testing

Benefits

Checklist

- CHANGELOG.md (awaiting maintainer guidance on format)