
Retry #3623

Merged (5 commits, Jun 20, 2024)
Conversation

rrousselGit (Owner)

Related Issues

fixes #1037

Checklist

Before you create this PR confirm that it meets all requirements listed below by checking the relevant checkboxes ([x]).

  • I have updated the CHANGELOG.md of the relevant packages.
    Changelog files must be edited in the following form:

    ## Unreleased fix/major/minor
    
    - Description of your change. (thanks to @yourGithubId)
  • If this contains new features or behavior changes,
    I have updated the documentation to match those changes.


changeset-bot bot commented Jun 19, 2024

⚠️ No Changeset found

Latest commit: be9da02

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.


@rrousselGit rrousselGit merged commit 3d9b748 into dev Jun 20, 2024
18 of 21 checks passed
@rrousselGit rrousselGit deleted the retry branch June 20, 2024 10:07

snapsl commented Jun 27, 2024

@rrousselGit is this feature intended for Riverpod 3.0 or a later release?
I like the feature but I have some concerns about setting a default retry timer that can endlessly retrigger.

This approach could potentially introduce various bugs:

  • http libraries that already have custom retry and timeout logic in place
  • unnecessary requests / rebuilds due to unresolvable errors
  • strange error UI behavior that would automatically resolve without user interaction
  • performance degradation due to multiple providers failing simultaneously and triggering retries
  • should a dependent provider retry while the parent provider fails (unnecessary)
  • ...

Considering these factors, handling all these cases during migration to Riverpod 3.0 could become unnecessarily complicated.
Is there already a strategy for handling this, or am I unnecessarily worried here?

rrousselGit (Owner, Author)

Considering these factors, handling all these cases during migration to Riverpod 3.0 could become unnecessarily complicated.

Not really. Just disable retries globally on ProviderScope:

ProviderScope(retry: (count, err) => null);

It takes one line to migrate :)
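
A minimal sketch of what that one-line migration could look like in practice, assuming the retry callback behaves as in the snippet above (returning null skips the retry); MyApp is a placeholder root widget:

void main() {
  runApp(
    ProviderScope(
      // Returning null tells Riverpod not to schedule another attempt.
      retry: (retryCount, error) => null,
      child: MyApp(), // hypothetical root widget
    ),
  );
}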

should a dependent provider retry while the parent provider fails (unnecessary)

That one is an interesting problem. But Riverpod should be able to detect that the exception is rethrown from a dependency, and silence the retry in this case.
I'll raise an issue.

strange error UI behavior that would automatically resolve without user interaction

It's usually better in terms of UI if the problem fixes itself.
The key is, you should usually use a progress indicator during the retry to show that it's retrying. A linear indicator under the appbar is usually the go-to.
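
One way that could look in a widget, sketched under the assumption (confirmed further down in this thread) that the provider's AsyncValue reports isRefreshing while a retry is in flight; userNameProvider and fetchUserName are placeholders, not part of the PR:

import 'package:flutter/material.dart';
import 'package:flutter_riverpod/flutter_riverpod.dart';

// Hypothetical network call that may throw and trigger a retry.
Future<String> fetchUserName() async => 'Jane Doe';

// Placeholder provider that may fail and be retried by Riverpod.
final userNameProvider = FutureProvider<String>((ref) => fetchUserName());

class UserScreen extends ConsumerWidget {
  const UserScreen({super.key});

  @override
  Widget build(BuildContext context, WidgetRef ref) {
    final user = ref.watch(userNameProvider);

    return Scaffold(
      appBar: AppBar(
        title: const Text('User'),
        // Thin indicator under the app bar while a retry is in progress.
        bottom: user.isRefreshing
            ? const PreferredSize(
                preferredSize: Size.fromHeight(4),
                child: LinearProgressIndicator(),
              )
            : null,
      ),
      body: user.when(
        data: (name) => Center(child: Text(name)),
        error: (err, stack) => Center(child: Text('Something went wrong: $err')),
        loading: () => const Center(child: CircularProgressIndicator()),
      ),
    );
  }
}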

unnecessary requests / rebuilds due to unresolvable error

If your app knows what is a solvable error vs what isn't, you can improve the default.
For example, you could enable the retry only on http 404 exceptions and a few other key scenarios.
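
Sketched out, such a policy could be passed through the same retry parameter; the exception types, attempt cap, and delays below are illustrative assumptions, and the Duration return type assumes the callback returns the delay before the next attempt (null meaning "don't retry", as in the snippet above):

import 'dart:async';
import 'dart:io';

// Illustrative policy: retry only errors the app considers transient,
// with a growing delay; everything else fails immediately.
Duration? appRetryPolicy(int retryCount, Object error) {
  // Only a few error types are worth retrying (illustrative choice).
  final isTransient = error is TimeoutException || error is SocketException;
  if (!isTransient) return null;

  // Stop after a handful of attempts.
  if (retryCount >= 3) return null;

  // Wait a bit longer before each new attempt.
  return Duration(milliseconds: 200 * (1 << retryCount));
}

// Then, once, app-wide:
// ProviderScope(retry: appRetryPolicy, child: MyApp());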

performance degradation due to multiple providers fail simultaneously and trigger retries

Retries are slow to trigger, and become even slower over time. It's unlikely to have a significant impact.


snapsl commented Jun 27, 2024

It takes one line to migrate :)

This is great! I had missed the new argument on ProviderScope.
This is probably the go-to for a quick and easy upgrade. Looking forward to the release :)

It's usually better in terms of UI if the problem fixes itself.
The key is, you should usually use a progress indicator during the retry to show that it's retrying. A linear indicator under the appbar is usually the go-to.

For example, in the async case the AsyncValue would be an AsyncError during a retry, if I understand it correctly. In the default case, we would then display an error screen with a linear ProgressIndicator that would be visible anywhere from 200ms to forever? (200ms would be very short if we display an error text.)

Is there a way to detect whether a provider is retrying? (I probably missed that too :D) This would be particularly useful if only a fixed number of retries is defined.

Also, how does this concept work for a sync provider?

Sorry for all the questions.


snapsl commented Jul 9, 2024

Just checking in briefly. Are there plans for an isRetrying flag?

rrousselGit (Owner, Author)

That's isRefreshing.


snapsl commented Jul 9, 2024

I'm not sure if isRefreshing is sufficient. This would only be true during invalidateSelf (unless I got that wrong).
The idea behind isRetrying is that it would be true while _retryCount is not exhausted, so that the final error message can be displayed once no more retries are performed.

rrousselGit (Owner, Author)

This would only be true during invalidateSelf

It will be true during retries, as retries are nothing more than an invalidateSelf on error.
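
If that holds, one way to separate "still retrying" from "retries exhausted" inside a ConsumerWidget's build method could look like the fragment below; it reuses the placeholder userNameProvider from the earlier sketch and assumes isRefreshing drops back to false once no further retry is scheduled:

final async = ref.watch(userNameProvider);

if (async.hasError && async.isRefreshing) {
  // A retry is still in flight: keep the messaging low-key.
  return const Text('Connection hiccup, retrying…');
}
if (async.hasError) {
  // No retry in progress anymore: surface the final error.
  return Text('Failed to load: ${async.error}');
}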


snapsl commented Jul 9, 2024

Thank you for the clarification! 👍
