Add retries #778

Closed · wants to merge 5 commits
87 changes: 87 additions & 0 deletions docs/advanced.md
@@ -466,3 +466,90 @@ If you do need to make HTTPS connections to a local server, for example to test
>>> r
<Response [200 OK]>
```

## Retries

Communicating with a peer over a network is inherently subject to errors. HTTPX provides built-in retry functionality to improve resilience against unexpected issues such as network faults or connection errors.

The default behavior is to retry at most 3 times on connection and network errors before marking the request as failed and bubbling up any exception. The delay between retries increases each time, to avoid overloading the server being requested.
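The behavior described above can be sketched in plain Python. This is a simplified model, not the actual HTTPX implementation: `send` is a hypothetical stand-in for issuing a request, and connection-level failures are modeled as `ConnectionError`.

```python
import time

def send_with_retries(send, request, retries=3, backoff_factor=0.2):
    """Sketch: try the request, and on a connection-level error retry
    up to `retries` times, waiting an exponentially growing delay
    between attempts."""
    for attempt in range(retries + 1):
        try:
            return send(request)
        except ConnectionError:
            if attempt == retries:
                raise  # Out of retries: bubble up the last error.
            # The first retry is immediate; later ones back off exponentially.
            delay = 0 if attempt == 0 else backoff_factor * 2 ** (attempt - 1)
            time.sleep(delay)
```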

### Setting and disabling retries

You can set the retry behavior on a client instance, in which case it applies to all requests made with that client:

```python
client = httpx.Client() # Retry at most 3 times on connection failures.
client = httpx.Client(retries=5) # Retry at most 5 times on connection failures.
client = httpx.Client(retries=0) # Disable retries.
```

### Fine-tuning the retries configuration

When instantiating a client, the `retries` argument may be one of the following...

* An integer, representing the maximum number of connection failures to retry on. Use `0` to disable retries entirely.

```python
client = httpx.Client(retries=5)
```

* An `httpx.Retries()` instance. It accepts the number of connection failures to retry on as a positional argument. The `backoff_factor` keyword argument specifies how quickly the wait time between retries should grow. It defaults to `0.2`, which corresponds to issuing a new request after `(0s, 0.2s, 0.4s, 0.8s, ...)`. (Note that many errors are resolved immediately by retrying, so HTTPX always issues the first retry right away.)

```python
# Retry at most 5 times on connection failures,
# and issue new requests after `(0s, 0.5s, 1s, 2s, 4s, ...)`
retries = httpx.Retries(5, backoff_factor=0.5)
client = httpx.Client(retries=retries)
```
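The schedule this produces can be sketched with a small generator mirroring the one this PR adds in `Retries.get_delays()` (the `delays` function below is illustrative, not an httpx API):

```python
import itertools

def delays(backoff_factor=0.2):
    """Sketch of the delay schedule: the initial request and the first
    retry go out immediately, then the wait doubles on each retry."""
    yield 0  # Initial request.
    yield 0  # First retry is issued right away.
    for n in itertools.count():
        yield backoff_factor * 2 ** n

# First six delays for backoff_factor=0.5:
schedule = list(itertools.islice(delays(0.5), 6))
print(schedule)  # [0, 0, 0.5, 1.0, 2.0, 4.0]
```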

### Advanced retries customization

The first argument to `httpx.Retries()` can also be an instance of an `httpx.RetryLimits` subclass. This is useful if you want to replace or extend the default behavior of retrying on connection failures.

Your `httpx.RetryLimits` subclass should implement a `.retry_flow()` generator method that `yield`s each request to be made, and be prepared for the following situations...

* (A) The request resulted in an `httpx.HTTPError`. If the error shouldn't be retried on, `raise` it as-is. If it should be retried on, make any necessary modifications to the request, and continue yielding. If the maximum number of retries has been exceeded, wrap the error in `httpx.TooManyRetries()` and raise the result.
* (B) The request went through, and the client sent back a `response`. If it shouldn't be retried on, `return` to terminate the retry flow. If it should be retried on (e.g. because it is an error response), make any necessary modifications to the request, and continue yielding. If the maximum number of retries has been exceeded, wrap the response in `httpx.TooManyRetries()` and raise the result.

As an example, here's how you could implement a custom retry limiting policy that retries on certain status codes:

```python
import httpx

class RetryOnStatusCodes(httpx.RetryLimits):
    def __init__(self, limit, status_codes):
        self.limit = limit
        self.status_codes = status_codes

    def retry_flow(self, request):
        retries_left = self.limit

        while True:
            response = yield request

            if response.status_code not in self.status_codes:
                return

            if retries_left == 0:
                try:
                    response.raise_for_status()
                except httpx.HTTPError as exc:
                    raise httpx.TooManyRetries(exc, response=response)
                else:
                    raise httpx.TooManyRetries(response=response)

            retries_left -= 1
```
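To see how a client consumes such a policy, here's a self-contained sketch of the generator protocol. `Response`, `retry_flow`, and the driver loop below are illustrative stand-ins rather than httpx APIs: the policy logic mirrors `RetryOnStatusCodes` above without the base class, and the driver mirrors how this PR's `send_handling_retries` primes the generator, feeds responses back in with `.send()`, and treats `StopIteration` as "return this response to the caller".

```python
from collections import namedtuple

# Hypothetical stand-in for httpx.Response, for illustration only.
Response = namedtuple("Response", "status_code")

def retry_flow(request, limit=3, status_codes=(429, 502, 503)):
    # Same logic as the policy above, minus the httpx base class.
    retries_left = limit
    while True:
        response = yield request
        if response.status_code not in status_codes:
            return
        if retries_left == 0:
            raise RuntimeError("too many retries")  # httpx.TooManyRetries in this PR
        retries_left -= 1

# Drive the generator the way a client would.
flow = retry_flow("GET /")
request = next(flow)  # Prime the generator to get the first request.

result = None
attempts = 0
for response in [Response(503), Response(503), Response(200)]:
    attempts += 1
    try:
        request = flow.send(response)  # Retryable: yields the next request.
    except StopIteration:
        result = response  # Not retryable: hand the response back.
        break
```

Here the two `503` responses are retried and the final `200` terminates the flow, so `attempts` ends at 3 and `result` holds the `200` response.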

To use a custom policy:

* Explicitly pass the number of times to retry on connection failures as a first positional argument to `httpx.Retries()`. (Use `0` to not retry on these failures.)
* Pass the custom policy as a second positional argument.

For example...

```python
# Retry at most 3 times on connection failures, and at most 3 times
# on '429 Too Many Requests', '502 Bad Gateway', or '503 Service Unavailable'.
retries = httpx.Retries(3, RetryOnStatusCodes(3, status_codes={429, 502, 503}))
```
7 changes: 6 additions & 1 deletion httpx/__init__.py
@@ -2,7 +2,7 @@
from .api import delete, get, head, options, patch, post, put, request, stream
from .auth import Auth, BasicAuth, DigestAuth
from .client import AsyncClient, Client
-from .config import PoolLimits, Proxy, Timeout
+from .config import PoolLimits, Proxy, Retries, Timeout
from .dispatch.asgi import ASGIDispatch
from .dispatch.wsgi import WSGIDispatch
from .exceptions import (
@@ -25,9 +25,11 @@
    StreamConsumed,
    TimeoutException,
    TooManyRedirects,
    TooManyRetries,
    WriteTimeout,
)
from .models import URL, Cookies, Headers, QueryParams, Request, Response
from .retries import RetryLimits
from .status_codes import StatusCode, codes

__all__ = [
@@ -54,6 +56,9 @@
    "PoolLimits",
    "Proxy",
    "Timeout",
    "Retries",
    "RetryLimits",
    "TooManyRetries",
    "ConnectTimeout",
    "CookieConflict",
    "ConnectionClosed",
3 changes: 3 additions & 0 deletions httpx/backends/asyncio.py
@@ -225,6 +225,9 @@ async def open_uds_stream(

        return SocketStream(stream_reader=stream_reader, stream_writer=stream_writer)

    async def sleep(self, seconds: float) -> None:
        await asyncio.sleep(seconds)

    def time(self) -> float:
        loop = asyncio.get_event_loop()
        return loop.time()
3 changes: 3 additions & 0 deletions httpx/backends/auto.py
@@ -41,6 +41,9 @@ async def open_uds_stream(
    ) -> BaseSocketStream:
        return await self.backend.open_uds_stream(path, hostname, ssl_context, timeout)

    async def sleep(self, seconds: float) -> None:
        await self.backend.sleep(seconds)

    def time(self) -> float:
        return self.backend.time()
3 changes: 3 additions & 0 deletions httpx/backends/base.py
@@ -111,6 +111,9 @@ async def open_uds_stream(
    ) -> BaseSocketStream:
        raise NotImplementedError()  # pragma: no cover

    async def sleep(self, seconds: float) -> None:
        raise NotImplementedError()  # pragma: no cover

    def time(self) -> float:
        raise NotImplementedError()  # pragma: no cover
3 changes: 3 additions & 0 deletions httpx/backends/trio.py
@@ -131,6 +131,9 @@ async def open_uds_stream(

        raise ConnectTimeout()

    async def sleep(self, seconds: float) -> None:
        await trio.sleep(seconds)

    def time(self) -> float:
        return trio.current_time()
68 changes: 65 additions & 3 deletions httpx/client.py
@@ -5,16 +5,19 @@
import hstspreload

from .auth import Auth, AuthTypes, BasicAuth, FunctionAuth
-from .backends.base import ConcurrencyBackend
+from .backends.base import ConcurrencyBackend, lookup_backend
from .config import (
    DEFAULT_MAX_REDIRECTS,
    DEFAULT_POOL_LIMITS,
    DEFAULT_RETRIES_CONFIG,
    DEFAULT_TIMEOUT_CONFIG,
    UNSET,
    CertTypes,
    PoolLimits,
    ProxiesTypes,
    Proxy,
    Retries,
    RetriesTypes,
    Timeout,
    TimeoutTypes,
    UnsetType,
@@ -33,6 +36,7 @@
    RedirectLoop,
    RequestBodyUnavailable,
    TooManyRedirects,
    TooManyRetries,
)
from .models import (
    URL,
@@ -64,6 +68,7 @@ def __init__(
        headers: HeaderTypes = None,
        cookies: CookieTypes = None,
        timeout: TimeoutTypes = DEFAULT_TIMEOUT_CONFIG,
        retries: RetriesTypes = DEFAULT_RETRIES_CONFIG,
        max_redirects: int = DEFAULT_MAX_REDIRECTS,
        base_url: URLTypes = None,
        trust_env: bool = True,
@@ -81,6 +86,7 @@
        self._headers = Headers(headers)
        self._cookies = Cookies(cookies)
        self.timeout = Timeout(timeout)
        self.retries = Retries(retries)
        self.max_redirects = max_redirects
        self.trust_env = trust_env
        self.netrc = NetRCInfo()
@@ -941,6 +947,7 @@ def __init__(
        http2: bool = False,
        proxies: ProxiesTypes = None,
        timeout: TimeoutTypes = DEFAULT_TIMEOUT_CONFIG,
        retries: RetriesTypes = DEFAULT_RETRIES_CONFIG,
        pool_limits: PoolLimits = DEFAULT_POOL_LIMITS,
        max_redirects: int = DEFAULT_MAX_REDIRECTS,
        base_url: URLTypes = None,
@@ -956,6 +963,7 @@
            headers=headers,
            cookies=cookies,
            timeout=timeout,
            retries=retries,
            max_redirects=max_redirects,
            base_url=base_url,
            trust_env=trust_env,
@@ -1106,10 +1114,16 @@ async def send(

        timeout = self.timeout if isinstance(timeout, UnsetType) else Timeout(timeout)

        retries = self.retries

        auth = self.build_auth(request, auth)

-        response = await self.send_handling_redirects(
-            request, auth=auth, timeout=timeout, allow_redirects=allow_redirects,
+        response = await self.send_handling_retries(
+            request,
+            auth=auth,
+            timeout=timeout,
+            retries=retries,
+            allow_redirects=allow_redirects,
        )

        if not stream:
@@ -1120,6 +1134,54 @@

        return response

    async def send_handling_retries(
        self,
        request: Request,
        auth: Auth,
        retries: Retries,
        timeout: Timeout,
        allow_redirects: bool = True,
    ) -> Response:
        backend = lookup_backend()

        delays = retries.get_delays()
        retry_flow = retries.retry_flow(request)

        # Initialize the generators.
        next(delays)
        request = next(retry_flow)

        while True:
            try:
                response = await self.send_handling_redirects(
                    request,
                    auth=auth,
                    timeout=timeout,
                    allow_redirects=allow_redirects,
                )
            except HTTPError as exc:
                logger.debug(f"HTTP Request failed: {exc!r}")
                try:
                    request = retry_flow.throw(type(exc), exc, exc.__traceback__)
                except (TooManyRetries, HTTPError):
                    raise
                else:
                    delay = next(delays)
                    logger.debug(f"Retrying in {delay} seconds")
                    await backend.sleep(delay)
            else:
                try:
                    request = retry_flow.send(response)
                except TooManyRetries:
                    raise
                except StopIteration:
                    return response
                else:
                    delay = next(delays)
                    logger.debug(f"Retrying in {delay} seconds")
                    await backend.sleep(delay)
                    continue

    async def send_handling_redirects(
        self,
        request: Request,
78 changes: 77 additions & 1 deletion httpx/config.py
@@ -1,11 +1,13 @@
import itertools
import os
import ssl
import typing
from pathlib import Path

import certifi

-from .models import URL, Headers, HeaderTypes, URLTypes
+from .models import URL, Headers, HeaderTypes, Request, Response, URLTypes
from .retries import DontRetry, RetryLimits, RetryOnConnectionFailures
from .utils import get_ca_bundle_from_env, get_logger

CertTypes = typing.Union[str, typing.Tuple[str, str], typing.Tuple[str, str, str]]
@@ -16,6 +18,7 @@
ProxiesTypes = typing.Union[
    URLTypes, "Proxy", typing.Dict[URLTypes, typing.Union[URLTypes, "Proxy"]]
]
RetriesTypes = typing.Union[int, "RetryLimits", "Retries"]


DEFAULT_CIPHERS = ":".join(
@@ -337,6 +340,79 @@ def __repr__(self) -> str:
        )


class Retries:
    """
    Retries configuration.

    Holds a retry limiting policy, and implements a configurable exponential
    backoff algorithm.
    """

    def __init__(self, *retries: RetriesTypes, backoff_factor: float = None) -> None:
        limits: RetriesTypes

        if len(retries) == 0:
            limits = RetryOnConnectionFailures(3)
        elif len(retries) == 1:
            limits = retries[0]
            if isinstance(limits, int):
                limits = (
                    RetryOnConnectionFailures(limits) if limits > 0 else DontRetry()
                )
            elif isinstance(limits, Retries):
                assert backoff_factor is None
                backoff_factor = limits.backoff_factor
                limits = limits.limits
            else:
                raise NotImplementedError(
                    "Passing a `RetryLimits` subclass as a single argument "
                    "is not supported. You must explicitly pass the number of times "
                    "to retry on connection failures. "
                    "For example: `Retries(3, MyRetryLimits(...))`."
                )
        elif len(retries) == 2:
            default, custom = retries
            assert isinstance(custom, RetryLimits)
            limits = Retries(default).limits | custom
        else:
            raise NotImplementedError(
                "Composing more than 2 retry limits is not supported yet."
            )

        if backoff_factor is None:
            backoff_factor = 0.2

        assert backoff_factor > 0
        self.limits: RetryLimits = limits
        self.backoff_factor: float = backoff_factor

    def __eq__(self, other: typing.Any) -> bool:
        return (
            isinstance(other, Retries)
            and self.limits == other.limits
            and self.backoff_factor == other.backoff_factor
        )

    def get_delays(self) -> typing.Iterator[float]:
        """
        Used by clients to determine how long to wait before issuing a new request.
        """
        yield 0  # Send the initial request.
        yield 0  # Retry immediately.
        for n in itertools.count(2):
            yield self.backoff_factor * (2 ** (n - 2))

    def retry_flow(self, request: Request) -> typing.Generator[Request, Response, None]:
        """
        Used by clients to determine what to do when failing to receive a response,
        or when a response was received.

        Delegates to the retry limiting policy.
        """
        yield from self.limits.retry_flow(request)


DEFAULT_TIMEOUT_CONFIG = Timeout(timeout=5.0)
DEFAULT_RETRIES_CONFIG = Retries(3, backoff_factor=0.2)
DEFAULT_POOL_LIMITS = PoolLimits(soft_limit=10, hard_limit=100)
DEFAULT_MAX_REDIRECTS = 20