
Retry unauthorized HTTP request with a renewed JWT token #1586

Merged
merged 5 commits on Nov 16, 2022

Conversation

@didier-wenzek (Contributor) commented Nov 14, 2022

Signed-off-by: Didier Wenzek <didier.wenzek@free.fr>

Proposed changes

Address the two root causes of unauthorized requests:

  • Re-open the MQTT connection for each JWT token request.
  • When an HTTP request to Cumulocity is rejected as unauthorized, then renew the JWT token and retry.
    Do this only once, since there is no point to get yet another fresh token if rejected twice in a row.

For the first point, the alternative is to keep the MQTT connection open and to have a background task that consumes all the JWT tokens, whether requested by the process or not, and caches the latest one. However, this alternative introduces more complexity, and opening an MQTT connection before each HTTP request should not be a performance issue for the current use-cases.
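The retry-once policy described above can be sketched as follows. This is an illustrative model only, not the actual thin-edge.io code: `Client`, `HttpError`, `send` and `renew_token` are hypothetical stand-ins for the real Cumulocity HTTP proxy types.

```rust
// Illustrative sketch of the retry-once policy; `Client`, `HttpError`
// and `send` are hypothetical stand-ins, not the actual crate API.

#[derive(Debug, PartialEq)]
enum HttpError {
    Unauthorized,
}

struct Client {
    token_fresh: bool,
}

impl Client {
    /// Simulate renewing the JWT token (over a fresh MQTT connection).
    fn renew_token(&mut self) {
        self.token_fresh = true;
    }

    /// Simulate an HTTP request: rejected unless the token is fresh.
    fn send(&self) -> Result<&'static str, HttpError> {
        if self.token_fresh {
            Ok("200 OK")
        } else {
            Err(HttpError::Unauthorized)
        }
    }

    /// Retry an unauthorized request exactly once with a renewed token:
    /// a second rejection in a row is reported as-is.
    fn send_with_retry(&mut self) -> Result<&'static str, HttpError> {
        match self.send() {
            Err(HttpError::Unauthorized) => {
                self.renew_token();
                self.send()
            }
            outcome => outcome,
        }
    }
}

fn main() {
    let mut client = Client { token_fresh: false };
    // First attempt is rejected; the single retry with a renewed token succeeds.
    assert_eq!(client.send_with_retry(), Ok("200 OK"));
}
```

The key property is that `renew_token` is called at most once per request, so a tenant that keeps rejecting a freshly minted token fails fast instead of looping.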

Types of changes

  • Bugfix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Improvement (general improvements like code refactoring that doesn't explicitly fix a bug or add any new functionality)
  • Documentation Update (if none of the other choices apply)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)

Paste Link to the issue

See #1562

Checklist

  • I have read the CONTRIBUTING doc
  • I have signed the CLA (in all commits with git commit -s)
  • I ran cargo fmt as mentioned in CODING_GUIDELINES
  • I used cargo clippy as mentioned in CODING_GUIDELINES
  • I have added tests that prove my fix is effective or that my feature works
  • I have added necessary documentation (if appropriate)

Further comments

@albinsuresh (Contributor) left a comment

The try_get_internal_id function also needs to be updated with this retry logic.

@@ -621,6 +632,18 @@ mod tests {
let config_file = create_test_config_file_with_content(config_content)?;

// Mock endpoint for config upload event creation
// Fake an expired JWT token by rejecting the first config upload request
Contributor

Isn't it better to have a dedicated test for this retry feature?

Contributor Author

I agree that I have been a bit lazy here ;-) I will see if I can find some time for that.

Contributor Author

Added with 73d114f

@reubenmiller (Contributor) left a comment

Looks good.

Moving forward, we probably need a more generic retry handler that also handles requests failing due to network interruptions. But that is beyond this ticket's scope, as it would involve additional aspects such as backing off before retrying.

@reubenmiller (Contributor) left a comment

(Deleted duplicate post)

@didier-wenzek (Contributor Author)

> The try_get_internal_id function also needs to be updated with this retry logic.

Done with 626d934

@albinsuresh (Contributor) left a comment

Neat trick using a fresh connection for every token request. The fix wasn't obvious to me on a first look, as I was searching for the logic that iterates over the message stream for the most recent token.

@reubenmiller (Contributor) left a comment

One of the checks is failing due to a fixup! commit. Just so we follow the correct process, could you please squash the changes to remove the fixup! commit?

@didier-wenzek (Contributor Author)

> Neat trick using a fresh connection for every token request. The fix wasn't obvious to me on a first look, as I was searching for the logic that iterates over the message stream for the most recent token.

I found no crystal clear way to get the most recent token. The main issue is that, when consuming messages from a subscription, there is no way to know that we have read all the available messages.

@didier-wenzek (Contributor Author)

> One of the checks is failing due to a fixup! commit. Just so we follow the correct process, could you please squash the changes to remove the fixup! commit?

Sure, I will git rebase -i --autosquash main.
We usually do that only once the PR is approved, the goal being to avoid push --force during the review process.

@reubenmiller (Contributor)

> One of the checks is failing due to a fixup! commit. Just so we follow the correct process, could you please squash the changes to remove the fixup! commit?
>
> Sure, I will git rebase -i --autosquash main. We usually do that only once the PR is approved, the goal being to avoid push --force during the review process.

Ok, thanks for the clarification. Approved

@reubenmiller (Contributor) left a comment

Looks good

Signed-off-by: Didier Wenzek <didier.wenzek@free.fr>
@albinsuresh (Contributor)

> Neat trick using a fresh connection for every token request. The fix wasn't obvious to me on a first look, as I was searching for the logic that iterates over the message stream for the most recent token.
>
> I found no crystal clear way to get the most recent token. The main issue is that, when consuming messages from a subscription, there is no way to know that we have read all the available messages.

This may have been an option: https://docs.rs/futures/0.3.25/futures/future/trait.FutureExt.html#method.now_or_never, calling it repeatedly until it returns None.
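The suggested pattern would look roughly like the sketch below. To keep the example dependency-free, it reimplements a minimal equivalent of `futures::FutureExt::now_or_never` on top of the standard library (poll once with a no-op waker); the `Subscription` type is a toy stand-in for a real MQTT message stream, not the actual API.

```rust
use std::collections::VecDeque;
use std::future::Future;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Minimal stand-in for futures::FutureExt::now_or_never: poll the future
// once with a no-op waker; Some(output) if it is already ready, else None.
fn now_or_never<F: Future>(fut: F) -> Option<F::Output> {
    fn raw() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker {
            raw()
        }
        fn noop(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(raw()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = Box::pin(fut);
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(out) => Some(out),
        Poll::Pending => None,
    }
}

// Toy token subscription: yields the tokens already received, immediately.
struct Subscription {
    received: VecDeque<&'static str>,
}

impl Subscription {
    async fn next(&mut self) -> Option<&'static str> {
        self.received.pop_front()
    }
}

fn main() {
    let mut sub = Subscription {
        received: VecDeque::from(vec!["token-1", "token-2", "token-3"]),
    };
    // Drain every immediately-available message, keeping the most recent one.
    let mut latest = None;
    while let Some(Some(token)) = now_or_never(sub.next()) {
        latest = Some(token);
    }
    assert_eq!(latest, Some("token-3"));
}
```

Note the caveat: with a real broker the stream can be Pending even though a newer token was just published but not yet delivered, so the drain may stop one token short.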

@didier-wenzek (Contributor Author)

> Neat trick using a fresh connection for every token request. The fix wasn't obvious to me on a first look, as I was searching for the logic that iterates over the message stream for the most recent token.
>
> I found no crystal clear way to get the most recent token. The main issue is that, when consuming messages from a subscription, there is no way to know that we have read all the available messages.
>
> This may have been an option: https://docs.rs/futures/0.3.25/futures/future/trait.FutureExt.html#method.now_or_never, calling it repeatedly until it returns None.

Thanks for the hint. Indeed, this might be useful. However, in this specific case, complexity arises from a race condition between this call and the reception of a token: one might get None not because all the published tokens have been read, but because the latest token has not been received yet.

3 participants