More granular retry support #260
Comments
@kevinburke Could you aggregate the discussion in #245, #145 (comment) and other places into this bug description? It would be nice to present some code examples of what we want the API to look like. Also, you make a good point about the HTTP methods having different retry behaviour. I'd like for this to be configurable, maybe some kind of …
Yeah, I was thinking a …
Sounds good. Two comments: …
Ok. I copied the "spec" into the issue description.
This would also be neat to write as a decorator, but this would not really be possible inside the library:

```python
@retry(connect=3, read=3, methods=['GET'])
def make_request(method, url, headers, query_string, data):
    """ make the request """
```
@kevinburke Cute thought. Maybe something for https://github.com/shazow/unstdlib.py. :P
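For what it's worth, a decorator like the one above is easy to sketch outside the library. Everything below is hypothetical — the `retry` decorator and its parameters just mirror the example in this thread, none of it is urllib3 API, and built-in exception types stand in for urllib3's own:

```python
import functools
import time


def retry(connect=3, read=3, methods=('GET',), backoff_factor=0):
    """Hypothetical decorator: retry connect/read failures for whitelisted methods."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(method, *args, **kwargs):
            connect_left, read_left = connect, read
            attempt = 0
            while True:
                try:
                    return func(method, *args, **kwargs)
                except ConnectionError:
                    # Connection-level failure: consume the connect budget.
                    connect_left -= 1
                    if method not in methods or connect_left < 0:
                        raise
                except OSError:
                    # Stand-in for read/timeout failures: consume the read budget.
                    read_left -= 1
                    if method not in methods or read_left < 0:
                        raise
                if backoff_factor:
                    # Exponential backoff, as discussed later in the thread.
                    time.sleep(backoff_factor * (2 ** attempt))
                attempt += 1
        return wrapper
    return decorator
```

Note the ordering of the `except` clauses matters: `ConnectionError` is a subclass of `OSError`, so it must come first.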
So what does our retry object look like so far? I imagine it something like:

```python
class Retry(object):
    DEFAULT_METHOD_WHITELIST = set([
        'HEAD', 'GET', 'PUT', 'DELETE', 'OPTIONS', 'TRACE'])
    DEFAULT_STATUS_WHITELIST = [range(100, 400), 408, 504]

    def __init__(self,
                 total=None, redirect=5,
                 error_total=5, error_timeout=None, error_connect=None, error_read=None,
                 method_whitelist=DEFAULT_METHOD_WHITELIST,
                 status_whitelist=DEFAULT_STATUS_WHITELIST,
                 backoff_factor=0):
        ...
```

The Retry constructor params should probably avoid mentioning "retry" and "retries", as that would be redundant. We should also be consistent about whether we use singular or plural, suffix or prefix; I could be convinced in either direction.

Presumably each retry count parameter would act as a maximum. That is, if I specify a …

And finally, the main thing we want is to allow people to bring in their own retry logic easily. This means: …
Perhaps the external API should be some method where we pass it an exception which represents the failure. This method can decide whether to raise a …

@piotr-dobrogost @alsroot @pasha-r Are we missing anything?
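One way to picture that suggestion: give the retry object a single hook that receives the failure and either raises or returns an updated object, so users can subclass to bring their own logic. A rough sketch — the names `increment` and `MaxRetryError` are illustrative here, not settled API:

```python
class MaxRetryError(Exception):
    """Raised when the retry budget is exhausted."""


class Retry(object):
    def __init__(self, total=5):
        self.total = total

    def increment(self, error):
        """Hook called by the pool on each failure.

        Either raises to give up, or returns a Retry carrying the
        remaining budget. Subclasses override this to plug in
        arbitrary retry logic (inspect the error, the method, etc.).
        """
        if self.total is not None and self.total <= 0:
            raise MaxRetryError(error)
        remaining = None if self.total is None else self.total - 1
        return Retry(total=remaining)
```

Returning a fresh object rather than mutating in place keeps a `Retry` instance safely shareable across requests.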
How about naming it `RetryPolicy` and having the kwarg be `retry_policy`?

Pros: …

Cons: …
What would you assume a … ?

It's just in one place and purely a convenience shortcut to building a simple …

Alternatively, what happens if someone provides both a … ?
This looks pretty sensible to me:
Or better yet:

```python
retry_policy = Retry(error_connect=3, backoff_factor=2, method_whitelist=['GET'])
...
http.request('GET', 'google.com', retries=retry_policy)
```

Certainly no worse than … I'm still leaning towards …
Ok, fair enough :)
A bit worried about the need for separate read and timeout retries, and a …
I didn't foresee myself using anything other than total, either. If that were the case, there wouldn't be a need for this :) See the linked GitHub issues for examples of people wanting different things. Really, I think we need a …
So... the semantics of total are a little tricky.
Great design, guys! I'm loving it :) A couple of possible problem places, though: …

Possible solutions: …
+1 on the …
How do you feel about extending the … ?

Also, another consideration: does a redirect count as a retry? Probably should have a …
Also, what happens when you do a … ? Either way, this scenario needs to be well-documented. :)
Trying to summarize this...
If there are 2 connection errors, then …

If there is 1 connect error and 1 read error, everything is fine (…).

If there is 1 connect error and 2 read errors, then …
So with that, the semantics become: …
Nar, … Another example: …

Specifying … In the scenario where …
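The counting semantics being debated here — per-error-type budgets plus a `total` acting as an overall cap, with `None` disabling a limit — can be made concrete with a small sketch. The budget numbers (`connect=2, read=2, total=3`) are illustrative, since the exact values in the original comments were lost in extraction, and `check_budget` is a hypothetical helper, not proposed API:

```python
class RetryExceeded(Exception):
    """Raised when an enabled retry budget is exceeded."""


def check_budget(connect_errors, read_errors, connect=2, read=2, total=3):
    """Raise once any enabled budget is exceeded; None disables that limit.

    `total` caps the combined error count even when the per-type
    budgets have not individually been hit.
    """
    if connect is not None and connect_errors > connect:
        raise RetryExceeded('connect')
    if read is not None and read_errors > read:
        raise RetryExceeded('read')
    if total is not None and connect_errors + read_errors > total:
        raise RetryExceeded('total')
```

So with these numbers, 1 connect + 1 read error is fine, while 2 connect + 2 read errors trips the `total` cap even though neither per-type budget is exceeded.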
So... I am becoming less confident in the interface as we've laid it out. I'm just not sure it's as intuitive as it could be, or that it solves the use cases I have in mind. Will try to think some more about this.

Can you share these use cases? I'm just keeping in mind the use cases from the feature requests urllib3 has had over the years (see the issues mentioned in the replies above). :) By default …
I had an idea to do something like https://gist.github.com/kevinburke/8565777, but reading that, it's not really intuitive either. I'll keep thinking about it. I just want to make sure the interface makes it pretty clear what's going to happen in every case.

Can you expand on what you feel is unclear in the scenarios I described?

\o/
I'm not sure what the interface for this would look like, or even at what level this would be implemented, but essentially, more function-level control over retries would be awesome. This is roughly the behavior we build in around a `requests.request`: …

Again, this might not be appropriate to build in at the library level (maybe we need better primitives for this internally), but adding all this logic around our HTTP requests leads to pretty gnarly code.

Some kind of `Retries` object would be nice, especially because it mirrors the `Timeout` interface, and there are a lot of things you could specify if you wanted to run wild with it, say the following:

- `connection_retries`, the number of times to retry a connection error
- `read_retries`, the number of times to retry in a situation where a connection was made to the server: a timeout, a closed connection, or an unacceptable HTTP error code, like 500
- `retry_codes`, a list of integers representing HTTP status codes; could also have named constants for `NON_200`, `5XX_ERROR`, etc.; would default to the 500 range
- `method_whitelist`, a list of HTTP methods to retry; defaults to HEAD, GET, PUT, DELETE, OPTIONS, TRACE, per the spec
- `backoff_factor`, some kind of multiplier to control how fast backoff occurs; defaults to 0. The algorithm would be something like `backoff_factor * (2 ** retry_attempt_number)`, so 1, 2, 4, 8...
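The backoff formula in that last bullet is simple enough to sketch directly. This assumes the formula exactly as written, with the attempt number 0-indexed (an assumption — the description doesn't pin down indexing, though "1, 2, 4, 8" with a factor of 1 implies it):

```python
def backoff_time(backoff_factor, attempt):
    """Seconds to wait before retry number `attempt` (0-indexed)."""
    return backoff_factor * (2 ** attempt)


# With backoff_factor=1 the waits double each time: 1, 2, 4, 8, ...
delays = [backoff_time(1, n) for n in range(4)]
```

A factor of 0 (the proposed default) makes every delay 0, i.e. backoff is disabled unless explicitly requested.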