How to handle ClientResponseError/ServerDisconnectedError properly #850

Closed
inkrement opened this issue Apr 13, 2016 · 6 comments

@inkrement

Hi,

It's not a bug, but a question. I am using this library to load a lot of HTTP files through a proxy. After some successful requests I always get a ServerDisconnectedError.

Is there a way to enable some sort of "auto-reconnect", and what is the recommended way to handle this error?

Task exception was never retrieved
future: <Task finished coro=<load() done, defined at run.py:21> exception=ServerDisconnectedError()>
Traceback (most recent call last):
  File "/usr/lib/python3.5/asyncio/tasks.py", line 237, in _step
    result = coro.throw(exc)
  File "run.py", line 23, in load
    page = await fetch_page(client, 'http://example.com/{}'.format(channel))
  File "run.py", line 12, in fetch_page
    async with session.get(url, headers={' User-Agent': 'myuseragent'}) as response:
  File "/var/loadViews/env/lib/python3.5/site-packages/aiohttp/client.py", line 538, in __aenter__
    self._resp = yield from self._coro
  File "/var/loadViews/env/lib/python3.5/site-packages/aiohttp/client.py", line 183, in _request
    conn = yield from self._connector.connect(req)
  File "/var/loadViews/env/lib/python3.5/site-packages/aiohttp/connector.py", line 310, in connect
    transport, proto = yield from self._create_connection(req)
  File "/var/loadViews/env/lib/python3.5/site-packages/aiohttp/connector.py", line 689, in _create_connection
    resp = yield from proxy_resp.start(conn, True)
  File "/var/loadViews/env/lib/python3.5/site-packages/aiohttp/client_reqrep.py", line 606, in start
    message = yield from httpstream.read()
  File "/var/loadViews/env/lib/python3.5/site-packages/aiohttp/streams.py", line 591, in read
    result = yield from super().read()
  File "/var/loadViews/env/lib/python3.5/site-packages/aiohttp/streams.py", line 446, in read
    yield from self._waiter
  File "/usr/lib/python3.5/asyncio/futures.py", line 385, in __iter__
    yield self  # This tells Task to wait for completion.
  File "/usr/lib/python3.5/asyncio/tasks.py", line 288, in _wakeup
    value = future.result()
  File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result
    raise self._exception
aiohttp.errors.ServerDisconnectedError
@unixsurfer

I had a similar problem and was hoping that aiohttp could have retry logic for connection errors and possibly for HTTP status codes. Because there isn't such a thing available yet, I came up with the following, which works for REST APIs where JSON is returned:

import asyncio
import json
import logging

import aiohttp

log = logging.getLogger(__name__)

HTTP_STATUS_CODES_TO_RETRY = [500, 502, 503, 504]
class FailedRequest(Exception):
    """
    A wrapper for all possible exceptions raised during an HTTP request.
    """
    code = 0
    message = ''
    url = ''
    raised = ''

    def __init__(self, *, raised='', message='', code='', url=''):
        self.raised = raised
        self.message = message
        self.code = code
        self.url = url

        super().__init__("code:{c} url={u} message={m} raised={r}".format(
            c=self.code, u=self.url, m=self.message, r=self.raised))


async def send_http(session, method, url, *,
                    retries=1,
                    interval=0.9,
                    backoff=3,
                    read_timeout=15.9,
                    http_status_codes_to_retry=HTTP_STATUS_CODES_TO_RETRY,
                    **kwargs):
    """
    Sends a HTTP request and implements a retry logic.

    Arguments:
        session (obj): A client aiohttp session object
        method (str): Method to use
        url (str): URL for the request
        retries (int): Number of times to retry in case of failure
        interval (float): Time to wait before retries
        backoff (int): Multiply interval by this factor after each failure
        read_timeout (float): Time to wait for a response
    """
    backoff_interval = interval
    raised_exc = None
    attempt = 0

    if method not in ['get', 'patch', 'post']:
        raise ValueError('unsupported HTTP method: {}'.format(method))

    if retries == -1:  # -1 means retry indefinitely
        attempt = -1
    elif retries == 0: # Zero means don't retry
        attempt = 1
    else:  # any other value means retry N times
        attempt = retries + 1

    while attempt != 0:
        if raised_exc:
            log.error('caught "%s" url:%s method:%s, remaining tries %s, '
                      'sleeping %.2f secs', raised_exc, url, method.upper(),
                      attempt, backoff_interval)
            await asyncio.sleep(backoff_interval)
            # bump interval for the next possible attempt
            backoff_interval = backoff_interval * backoff
        log.info('sending %s %s with %s', method.upper(), url, kwargs)
        try:
            with aiohttp.Timeout(timeout=read_timeout):
                async with getattr(session, method)(url, **kwargs) as response:
                    if response.status == 200:
                        try:
                            data = await response.json()
                        except json.decoder.JSONDecodeError as exc:
                            log.error(
                                'failed to decode response code:%s url:%s '
                                'method:%s error:%s response:%s',
                                response.status, url, method.upper(), exc,
                                response.reason
                            )
                            raise aiohttp.errors.HttpProcessingError(
                                code=response.status, message=exc.msg)
                        else:
                            log.info('code:%s url:%s method:%s response:%s',
                                    response.status, url, method.upper(),
                                    response.reason)
                            raised_exc = None
                            return data
                    elif response.status in http_status_codes_to_retry:
                        log.error(
                            'received invalid response code:%s url:%s error:%s'
                            ' response:%s', response.status, url, '',
                            response.reason
                        )
                        raise aiohttp.errors.HttpProcessingError(
                            code=response.status, message=response.reason)
                    else:
                        try:
                            data = await response.json()
                        except json.decoder.JSONDecodeError as exc:
                            log.error(
                                'failed to decode response code:%s url:%s '
                                'error:%s response:%s', response.status, url,
                                exc, response.reason
                            )
                            raise FailedRequest(
                                code=response.status, message=exc,
                                raised=exc.__class__.__name__, url=url)
                        else:
                            log.warning('received %s for %s', data, url)
                            # NOTE: parsing the error payload like this is
                            # specific to the API being called; adjust it.
                            print(data['errors'][0]['detail'])
                            raised_exc = None
        except (aiohttp.errors.ClientResponseError,
                aiohttp.errors.ClientRequestError,
                aiohttp.errors.ClientOSError,
                aiohttp.errors.ClientDisconnectedError,
                aiohttp.errors.ClientTimeoutError,
                asyncio.TimeoutError,
                aiohttp.errors.HttpProcessingError) as exc:
            try:
                code = exc.code
            except AttributeError:
                code = ''
            raised_exc = FailedRequest(code=code, message=exc, url=url,
                                       raised=exc.__class__.__name__)
        else:
            raised_exc = None
            break

        attempt -= 1

    if raised_exc:
        raise raised_exc
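
A minimal usage sketch (not part of the snippet above; the URL and retry values are placeholders, and it assumes send_http and FailedRequest are defined as shown):

import asyncio

import aiohttp


async def main():
    # One session is reused for every attempt made by send_http().
    async with aiohttp.ClientSession() as session:
        try:
            data = await send_http(session, 'get', 'http://example.com/api',
                                   retries=3, interval=1.0, backoff=2)
            print(data)
        except FailedRequest as exc:
            print('giving up:', exc)


asyncio.get_event_loop().run_until_complete(main())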

@inkrement
Author

Wow, great. Thanks!

@unixsurfer

Don't forget to adjust the JSON parsing, as it is very specific to my needs.

@kootenpv

kootenpv commented May 4, 2018

My question is whether this error is specific to this library, and should thus always be ignored/caught/retried, or whether I should expect that it could also be caused by something on the server side.

@asvetlov
Member

asvetlov commented May 4, 2018

The exception is raised when the peer has closed the socket.
It's up to you how to handle this situation.
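
If you decide to treat it as transient, one option is a small retry wrapper around the request; this is only a sketch, and the attempt count and delay are arbitrary:

import asyncio

import aiohttp


async def fetch_with_retry(session, url, attempts=3, delay=1.0):
    """GET a URL, retrying when the peer closes the connection."""
    for attempt in range(attempts):
        try:
            async with session.get(url) as response:
                return await response.text()
        except aiohttp.ServerDisconnectedError:
            if attempt == attempts - 1:
                raise  # out of retries; let the caller decide
            await asyncio.sleep(delay)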

@rogerdahl

Try setting the client to tell the server to close the connection after each request:

self._session = aiohttp.ClientSession(
    ...
    headers={"Connection": "close"},
)

HTTP keep-alive allows the same connection to be used for multiple requests, which can improve performance, especially over HTTPS, where the connection handshake can be lengthy. But the server may have a limit as to how long it's willing to keep the connection alive or how many requests it will accept on the same connection. When the limit runs out, it closes the connection.
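
A related client-side option (a sketch, not from this thread) is the connector's force_close flag, which drops every connection after a single request instead of relying on the server honouring the header:

import aiohttp


async def make_session():
    # force_close=True disables HTTP keep-alive on the client side, so the
    # connection is torn down after every request instead of being reused.
    connector = aiohttp.TCPConnector(force_close=True)
    return aiohttp.ClientSession(connector=connector)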
