
Retry after delay operator #164

Closed
anechaev opened this issue Dec 23, 2019 · 7 comments
Labels
enhancement (New feature or request), question (Further information is requested)

Comments

@anechaev

Is there a clean way to write a "retry after delay" operator that would wait for some time and then retry on error?

An alternative I can imagine is to wrap the error emission in some sort of DispatchQueue.asyncAfter and combine it with the retry operator afterwards.

@anechaev anechaev added the enhancement New feature or request label Dec 23, 2019
@heckj heckj added the question Further information is requested label Dec 23, 2019
@heckj
Owner

heckj commented Dec 23, 2019

It's a good question, but without a super amazing answer as yet. For the simpler use cases, I've used the delay() operator in coordination with the retry() operator to get the basics in place. You can see an explicit example of this at https://heckj.github.io/swiftui-notes/#patterns-retry

In the example, I'm using a random (shortish) backoff. You could just as easily make that a static/consistent delay.

It's not set up as a single operator within Combine, however.
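
For reference, a minimal sketch of that delay-plus-retry combination (not the exact code from the linked pattern; someAsyncAPICall is a hypothetical stand-in for the real request) might look like:

    import Combine
    import Foundation

    // Hypothetical stand-in for the real async request; always fails here for illustration.
    func someAsyncAPICall() -> AnyPublisher<String, Error> {
        Fail<String, Error>(error: URLError(.badServerResponse)).eraseToAnyPublisher()
    }

    // delay() holds events (including any failure) for a random, shortish interval,
    // then retry(2) re-subscribes to the upstream up to two more times.
    let remoteDataPublisher = someAsyncAPICall()
        .delay(for: .seconds(Double.random(in: 1...3)), scheduler: DispatchQueue.global())
        .retry(2)
        .eraseToAnyPublisher()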

The immediate question I often see is a common desire: how do I get an increasing backoff within a pipeline? And that's where I start to fall down on anything useful.

Does this help highlight a solution for you?

@anechaev
Author

anechaev commented Dec 23, 2019

https://heckj.github.io/swiftui-notes/#patterns-retry inserts a delay unconditionally. How would you add a delay only if the upstream publisher raises an error? It doesn't need to be a single operator; chaining delay and retry is okay too.
That's a good point about an exponentially increasing delay interval; it would be nice to have it baked into the logic.

@heckj
Owner

heckj commented Dec 23, 2019

Yep, it's a downside of that implementation. If you want to handle the error and retry within the pipeline of operators, that's the best I've been able to work out so far.

Another option is pushing the "how you handle it" out to the end of the pipeline: invoke the retry not with the operator but with logic in a sink() subscriber, where you explicitly delay and then kick off another run of whatever you wanted to try (roughly sketched below). This exposes the retry logic to your higher-level code (and makes you track it manually), which also may not be what you'd like to see.
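
A rough sketch of that idea, using hypothetical names (makeRequest, retriesLeft, run) rather than anything from an existing example:

    import Combine
    import Foundation

    // Stand-in for the real async API call; always fails here for illustration.
    func makeRequest() -> AnyPublisher<String, Error> {
        Fail<String, Error>(error: URLError(.badServerResponse)).eraseToAnyPublisher()
    }

    var retriesLeft = 2
    var cancellable: AnyCancellable?

    func run() {
        cancellable = makeRequest()
            .sink(receiveCompletion: { completion in
                if case .failure = completion, retriesLeft > 0 {
                    retriesLeft -= 1
                    // explicit delay before manually kicking off another attempt
                    DispatchQueue.global().asyncAfter(deadline: .now() + 3) {
                        run()
                    }
                }
            }, receiveValue: { value in
                print("received value: \(value)")
            })
    }

    run()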

@anechaev
Author

I guess this code will work for me:

    let resultPublisher = upstreamPublisher
        .catch { error -> AnyPublisher<String, Error> in
            return Publishers.Delay(
                upstream: upstreamPublisher,
                interval: 3,
                tolerance: 1,
                scheduler: DispatchQueue.global()
            ).eraseToAnyPublisher()
        }
        .retry(2)

@heckj
Owner

heckj commented Dec 24, 2019

That will indeed delay only on error, but a side effect of returning the delay from a catch and then doing the .retry() appears to be doubling up the upstream demand requests - so instead of 3 requests being made with retry(2), you're getting six.

It's quite possible I incorrectly set up the test to validate this, so check it out at your convenience.

The output is clearly only invoking the delay 3 times, but the underlying async call is getting double-requested on every error-catch-retry:

[12:25:59.5170]  * starting async call (waiting 1 seconds before returning) 
[12:26:00.6100]  * completing async call 
[12:26:00.6130] delaying on error for ~3 seconds 
[12:26:00.6130]  * starting async call (waiting 1 seconds before returning) 
[12:26:01.7090]  * completing async call 
[12:26:04.7940]  * starting async call (waiting 1 seconds before returning) 
[12:26:05.8100]  * completing async call 
[12:26:05.8100] delaying on error for ~3 seconds 
[12:26:05.8110]  * starting async call (waiting 3 seconds before returning) 
[12:26:08.9090]  * completing async call 
[12:26:11.9560]  * starting async call (waiting 2 seconds before returning) 
[12:26:14.0460]  * completing async call 
[12:26:14.0470] delaying on error for ~3 seconds 
[12:26:14.0470]  * starting async call (waiting 3 seconds before returning) 
[12:26:17.1420]  * completing async call 
[12:26:20.2270] .sink() received the completion:  failure(UsingCombineTests.RetryPublisherTests.TestFailureCondition.invalidServerResponse)

example/test in #165

@heckj
Owner

heckj commented Dec 25, 2019

Hey @anechaev - I found a slightly improved solution, and worked through the duplicate requests thing.

The improved stanza moves the retry() inline with the Publishers.Delay() that you're triggering inside the catch. That puts the retry() only on the delayed requests; with it outside, the "retry" effect is sent up both the original and the Delay chains, duplicating all your async API requests.

The slightly better stanza is:

        let resultPublisher = upstreamPublisher.catch { error -> AnyPublisher<String, Error> in
            return Publishers.Delay(upstream: upstreamPublisher,
                                    interval: 3,
                                    tolerance: 1,
                                    scheduler: DispatchQueue.global())
                // moving retry into this block reduces the number of duplicate requests
                // In effect, there's the original request, and the `retry(2)` here will operate
                // two additional retries on the otherwise one-shot publisher that is initiated with
                // the `Publishers.Delay()` just above. Just starting this publisher with delay makes
                // an additional request, so the total number of requests ends up being 4 (assuming all
                // fail). However, no delay is introduced in this sequence if the original request
                // is successful.
                .retry(2)
                .eraseToAnyPublisher()
        }

@heckj heckj closed this as completed Dec 25, 2019
@anechaev
Author

You're right, moving retry inside the catch block works correctly. However, I'm not 100% sure that accessing upstreamPublisher in the downstream operator is good practice - it feels like we're breaking the pipeline by doing that.

Maybe moving this block into an extension would look better:

extension Publisher {
  func retryWithDelay<T, E>()
    -> Publishers.Catch<Self, AnyPublisher<T, E>> where T == Self.Output, E == Self.Failure
  {
    return self.catch { error -> AnyPublisher<T, E> in
      return Publishers.Delay(
        upstream: self,
        interval: 3,
        tolerance: 1,
        scheduler: DispatchQueue.global()).retry(2).eraseToAnyPublisher()
    }
  }
}
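
For what it's worth, a hypothetical call site (assuming upstreamPublisher is an AnyPublisher<String, Error> as in the earlier snippets) might look like:

    // Hypothetical usage of the extension above.
    let cancellable = upstreamPublisher
        .retryWithDelay()
        .sink(receiveCompletion: { completion in
            print(".sink() received the completion: ", String(describing: completion))
        }, receiveValue: { value in
            print(".sink() received value: ", value)
        })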
