Retry callbacks passing errors. #370
Changes from all commits
```diff
@@ -10,24 +10,76 @@ import (
 	"github.com/go-kit/kit/endpoint"
 )
 
+// RetryError is an error wrapper that is used by the retry mechanism. All
+// errors returned by the retry mechanism via its endpoint will be RetryErrors.
+type RetryError struct {
+	RawErrors []error // all errors encountered from endpoints directly
+	Final     error   // the final, terminating error
+}
+
+func (e RetryError) Error() string {
+	var suffix string
+	if len(e.RawErrors) > 1 {
+		a := make([]string, len(e.RawErrors)-1)
+		for i := 0; i < len(e.RawErrors)-1; i++ { // last one is Final
+			a[i] = e.RawErrors[i].Error()
+		}
+		suffix = fmt.Sprintf(" (previously: %s)", strings.Join(a, "; "))
+	}
+	return fmt.Sprintf("%v%s", e.Final, suffix)
+}
+
+// Callback is a function that is given the current attempt count and the error
+// received from the underlying endpoint. It should return whether the Retry
+// function should continue trying to get a working endpoint, and a custom error
+// if desired. The error message may be nil, but a true/false is always
+// expected. In all cases, if the replacement error is supplied, the received
+// error will be replaced in the calling context.
+type Callback func(n int, received error) (keepTrying bool, replacement error)
+
 // Retry wraps a service load balancer and returns an endpoint oriented load
-// balancer for the specified service method.
-// Requests to the endpoint will be automatically load balanced via the load
-// balancer. Requests that return errors will be retried until they succeed,
-// up to max times, or until the timeout is elapsed, whichever comes first.
+// balancer for the specified service method. Requests to the endpoint will be
+// automatically load balanced via the load balancer. Requests that return
+// errors will be retried until they succeed, up to max times, or until the
+// timeout is elapsed, whichever comes first.
 func Retry(max int, timeout time.Duration, b Balancer) endpoint.Endpoint {
+	return RetryWithCallback(timeout, b, maxRetries(max))
+}
+
+func maxRetries(max int) Callback {
+	return func(n int, err error) (keepTrying bool, replacement error) {
+		return n < max, nil
+	}
+}
+
+func alwaysRetry(int, error) (keepTrying bool, replacement error) {
+	return true, nil
+}
+
+// RetryWithCallback wraps a service load balancer and returns an endpoint
+// oriented load balancer for the specified service method. Requests to the
+// endpoint will be automatically load balanced via the load balancer. Requests
+// that return errors will be retried until they succeed, up to max times, until
+// the callback returns false, or until the timeout is elapsed, whichever comes
+// first.
+func RetryWithCallback(timeout time.Duration, b Balancer, cb Callback) endpoint.Endpoint {
+	if cb == nil {
+		cb = alwaysRetry
+	}
```
Member: We can be safer here. If there's a nil callback, we can substitute a no-op or always-pass implementation instead.

Contributor (author): Substituting a no-op now, with a test.
```diff
+	if b == nil {
+		panic("nil Balancer")
+	}
 	return func(ctx context.Context, request interface{}) (response interface{}, err error) {
 		var (
 			newctx, cancel = context.WithTimeout(ctx, timeout)
 			responses      = make(chan interface{}, 1)
 			errs           = make(chan error, 1)
-			a              = []string{}
+			final          RetryError
 		)
 		defer cancel()
-		for i := 1; i <= max; i++ {
+		for i := 1; ; i++ {
 			go func() {
 				e, err := b.Endpoint()
 				if err != nil {
```
```diff
@@ -45,13 +97,22 @@ func Retry(max int, timeout time.Duration, b Balancer) endpoint.Endpoint {
 			select {
 			case <-newctx.Done():
 				return nil, newctx.Err()
 			case response := <-responses:
 				return response, nil
 			case err := <-errs:
-				a = append(a, err.Error())
+				final.RawErrors = append(final.RawErrors, err)
+				keepTrying, replacement := cb(i, err)
+				if replacement != nil {
+					err = replacement
+				}
+				if !keepTrying {
+					final.Final = err
+					return nil, final
+				}
 				continue
 			}
 		}
-		return nil, fmt.Errorf("retry attempts exceeded (%s)", strings.Join(a, "; "))
 	}
 }
```
Member: So I haven't given this enough thought to give you really actionable advice yet, but I'm not happy with the way this is currently structured. Namely, I'd want the "retry up to max times, or until timeout, whichever comes first" logic to be encoded in the callback passed to the function, and not hard-coded into the function itself. I'm not sure how it would look, precisely. Can you see if that's possible?
Contributor (author): I've split the max retry logic out into the callback. There's a helper function for generating a max-retries callback if that's all you want to do, and `Retry()` passes one of those on to `RetryWithCallback()` now.

I'm not sure if or how to tackle handling timeouts in the callback. I think it's reasonable to keep those two jobs separate; I'm happy with the timeout in the `select` inside `RetryWithCallback()`. What do you reckon @peterbourgon?