Support user-defined retrying strategies #2
Labels: breaking change (introduction of an incompatible API change), enhancement (new feature or request therefor), under consideration (dev has not yet decided whether or how to implement)
Possible API:
- The `Client` constructor is passed an instance of a `Retrier` protocol instead of a `RetryConfig`. (A rough sketch of the pieces involved follows this list.)
- When the client gets an error (including an `HTTPError` from `raise_for_status()`), it passes the error to the retrier's `handle()` (or `on_error()`?) method, along with a dataclass with attributes for the retry number and the time at which the client started attempting to make the request, before any retries.
- The `handle()` method either returns a `float`/`int` — causing the client to sleep that many seconds and then retry — or else returns `None` — indicating that no retrying should occur, in which case the client reraises the original error.
- The dataclass should also have a `dict` attribute for the retrier to store arbitrary data in.
- Also give the dataclass `method` and `url` attributes.
- Idea: Change `Retrier` to an ABC and give it an `on_success()` method that defaults to `pass`?
- The current retrying strategy should be implemented by a default `Retrier` class (named `GitHubRetrier`?).
- Add a `NullRetrier` implementation that never retries.
- Ideally, it should be easy for the user to take the default retrier and add in a custom retry condition to consider in addition to the default conditions, possibly with calculation of custom sleep times when this condition occurs.
    - It should be easy for a custom retry condition to indicate that the sleep time should be based on exponential backoff without the condition having to do the backoff calculation itself.
        - Provide a `backoff(attempts: int) -> float` method for this purpose?
    - If a custom retry condition returns a sleep time less than the current exponential backoff time, the latter is used as the sleep time instead, just like for the current strategy.
    - Idea: Define the default retrier or another retrier as taking a collection of `Callable[[RequestError, RetryState], float | None]` values, and then have the `handle()` method call each one and use the maximum result (possibly clamped to not exceed the stop time). (Also sketched below.)
- Once this is done, use `ghreq` in `tinuous`.
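A minimal sketch of what the protocol, the state dataclass, and `NullRetrier` might look like. The names `Retrier`, `RetryState`, `handle()`, `method`, `url`, and `NullRetrier` come from the list above; the remaining field names (`retries`, `start_time`, `extra`) are placeholders, not a settled API:

```python
from __future__ import annotations

from dataclasses import dataclass, field
from typing import Any, Optional, Protocol


@dataclass
class RetryState:
    # Number of retries attempted so far
    retries: int
    # Time at which the client first started attempting the request
    # (field name is a placeholder)
    start_time: float
    # Request details
    method: str
    url: str
    # Arbitrary storage for the retrier's own use (field name is a placeholder)
    extra: dict[str, Any] = field(default_factory=dict)


class Retrier(Protocol):
    def handle(self, error: Exception, state: RetryState) -> Optional[float]:
        """
        Return the number of seconds to sleep before retrying, or None to
        give up and let the client reraise the error.
        """
        ...


class NullRetrier:
    """A retrier that never retries."""

    def handle(self, error: Exception, state: RetryState) -> Optional[float]:
        return None
```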
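For context, roughly how the client side of this contract could work, reusing the `Retrier` and `RetryState` sketches above. This is not actual `ghreq` code; `send_raw()` and the function name are stand-ins for whatever actually performs the HTTP request:

```python
import time


def request_with_retries(client, retrier: Retrier, method: str, url: str):
    # Hypothetical client-side loop; names are placeholders, not ghreq API.
    state = RetryState(
        retries=0, start_time=time.monotonic(), method=method, url=url
    )
    while True:
        try:
            return client.send_raw(method, url)  # stand-in for the real request call
        except Exception as e:
            delay = retrier.handle(e, state)
            if delay is None:
                raise  # retrier declined to retry; reraise the original error
            time.sleep(delay)
            state.retries += 1
```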
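And a sketch of the "collection of callables" idea, again reusing `RetryState` from above. The class name `CallableRetrier` and the backoff parameters are made up for illustration, and `Exception` stands in for the issue's `RequestError` to keep the sketch self-contained; the clamping shown is just one possible reading of "clamped to not exceed the stop time":

```python
import time
from typing import Callable, Optional, Sequence


class CallableRetrier:
    """
    Combine a collection of retry conditions, each a callable mapping
    (error, state) to a sleep time in seconds or None ("don't retry").
    """

    def __init__(
        self,
        conditions: Sequence[Callable[[Exception, RetryState], Optional[float]]],
        base_delay: float = 1.0,       # illustrative backoff parameters
        max_total_time: float = 300.0,
    ) -> None:
        self.conditions = list(conditions)
        self.base_delay = base_delay
        self.max_total_time = max_total_time

    def backoff(self, attempts: int) -> float:
        # Exponential backoff that conditions can rely on without doing
        # the calculation themselves
        return self.base_delay * (2 ** attempts)

    def handle(self, error: Exception, state: RetryState) -> Optional[float]:
        delays = [
            d for cond in self.conditions
            if (d := cond(error, state)) is not None
        ]
        if not delays:
            return None  # no condition wants a retry; client reraises
        # Use the maximum requested delay, but never sleep less than the
        # current exponential backoff time...
        delay = max(max(delays), self.backoff(state.retries))
        # ...and give up rather than run past the overall stop time.
        elapsed = time.monotonic() - state.start_time
        if elapsed + delay > self.max_total_time:
            return None
        return delay
```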