
aiohttp.client_exceptions.ClientOSError: [Errno None] Can not write request body #6138

Open
1 task done
harshitsinghai77 opened this issue Oct 27, 2021 · 20 comments
Labels
bug client reproducer: missing This PR or issue lacks code, which reproduce the problem described or clearly understandable STR

Comments


harshitsinghai77 commented Oct 27, 2021

Describe the bug

File "/usr/local/lib/python3.7/site-packages/aiohttp/streams.py", line 604, in read
    await self._waiter
aiohttp.client_exceptions.ClientOSError: [Errno None] Can not write request body for

To Reproduce

from config import (
    CLICKHOUSE_USER,
    CLICKHOUSE_PORT,
    CLICKHOUSE_PASSWORD,
    CLICKHOUSE_HOST,
)

from aiochclient import ChClient
import aiohttp

class ClickhouseConnection:
    _session = None
    _client = None

    @classmethod
    async def create_connection(cls):
        connector = aiohttp.TCPConnector(limit=30)
        cls._session = aiohttp.ClientSession(connector=connector)
        cls._client = ChClient(
            session=cls._session,
            url=f"http://{CLICKHOUSE_HOST}:{CLICKHOUSE_PORT}",
            user=CLICKHOUSE_USER,
            password=CLICKHOUSE_PASSWORD,
            database="cliff",
        )

    @classmethod
    async def create_intermediate_roll_up_table(cls, table_name, dimensions, measures):
        create_table_query = f"MY QUERY"
        await cls._client.execute(create_table_query)

    @classmethod
    async def add_bulk_data_to_rollup_table(cls, columns, table_name, data_list):
        insert_statement = "MY STATEMENT"
        await cls._client.execute(insert_statement, *data_list)
        
    @classmethod
    async def execute_query(cls, query, execute_many=False, as_dict=False):
        if execute_many:
            return await cls._client.fetch(query=query, json=as_dict)
        return await cls._client.fetchrow(query=query)

    @classmethod
    async def optimize_clickhouse_table(cls, table_name: str):
        optimize_query = f"OPTIMIZE TABLE {table_name} FINAL DEDUPLICATE;"
        await cls._client.execute(optimize_query)

    @classmethod
    async def gracefully_close_clickhouse_connection(cls):
        await cls._session.close()
        await cls._client.close()
        LOGGER.info("Closed all connections")

Expected behavior

I'm using these methods again and again inside a for loop.

These work most of the time, but sometimes aiohttp throws this error.

Logs/tracebacks

File "/source/taa_utils/clickhouse_utils.py", line 100, in create_intermediate_roll_up_table
    await cls._client.execute(create_table_query)
  File "/usr/local/lib/python3.7/site-packages/aiochclient/client.py", line 230, in execute
    query, *args, json=json, query_params=params, query_id=query_id
  File "/usr/local/lib/python3.7/site-packages/aiochclient/client.py", line 189, in _execute
    url=self.url, params=params, data=data
  File "/usr/local/lib/python3.7/site-packages/aiochclient/http_clients/aiohttp.py", line 38, in post_no_return
    async with self._session.post(url=url, params=params, data=data) as resp:
  File "/usr/local/lib/python3.7/site-packages/aiohttp/client.py", line 1117, in __aenter__
    self._resp = await self._coro
  File "/usr/local/lib/python3.7/site-packages/aiohttp/client.py", line 544, in _request
    await resp.start(conn)
  File "/usr/local/lib/python3.7/site-packages/aiohttp/client_reqrep.py", line 890, in start
    message, payload = await self._protocol.read()  # type: ignore
  File "/usr/local/lib/python3.7/site-packages/aiohttp/streams.py", line 604, in read
    await self._waiter
aiohttp.client_exceptions.ClientOSError: [Errno None] Can not write request body for

Python Version

$ python --version
Python 3.7.0

aiohttp Version

$ python -m pip show aiohttp
Name: aiohttp
Version: 3.7.4.post0

multidict Version

$ python -m pip show multidict
Name: multidict
Version: 5.1.0
Summary: multidict implementation

yarl Version

$ python -m pip show yarl
Name: yarl
Version: 1.6.3
Summary: Yet another URL library

OS

Linux Debian

Related component

Client

Additional context

No response

Code of Conduct

  • I agree to follow the aio-libs Code of Conduct

OmerErez commented Jan 9, 2022

I'm having the exact same issue with python 3.7.4, aiohttp 3.5.4, multidict 4.5.2, yarl 1.3.0

Is there any solution?

@harshitsinghai77 (Author)

This only happens when we query the database inside a for loop...

Anyhow, I switched back to the official ClickHouse Python driver, which is synchronous in nature but gets the job done.


OmerErez commented Jan 9, 2022

It doesn't happen to me while using ClickHouse.
I get the exact same error, also using ClientSession, but with regular HTTP requests (session.post).
It also happens only to a portion of the requests.

@webknjaz (Member)

aiohttp 3.7 is EOL and won't get any update. Is this happening under aiohttp 3.8?

@webknjaz webknjaz added the reproducer: missing This PR or issue lacks code, which reproduce the problem described or clearly understandable STR label Jan 10, 2022
@webknjaz (Member)

Also, try asking the library you use (aiochclient); maybe they pass invalid args to aiohttp.

  File "/usr/local/lib/python3.7/site-packages/aiochclient/http_clients/aiohttp.py", line 38, in post_no_return
    async with self._session.post(url=url, params=params, data=data) as resp:

There's not enough information provided to guess what's happening; without understanding what exactly is passed, it's a lost cause. We need an aiohttp-only reproducer demonstrating that this problem actually exists. Without that, we'll probably have to close this, as it does not demonstrate a bug the way it is reported.

Current judgment: this is likely a problem in that third-party library; maybe they misuse aiohttp.


OmerErez commented Jan 10, 2022

I wasn't using aiochclient, but straightforward aiohttp.
With it, I would send HTTP requests to an nginx instance that proxies me to different containers (FaaS).

I was able to solve the issue, by looking at the nginx logs at the same time I would receive those exceptions in my app, and see that I receive these errors:

[alert] 7#7: 1024 worker_connections are not enough
[alert] 7#7: *55279 1024 worker_connections are not enough while connecting to upstream

To solve this, with a little help from Google, I added this to my nginx.conf file:

events {
    worker_connections 10000;
}

Thanks anyways!

@RohithBhandaru

I'm also getting this error, although only for a minor portion of the requests inside a for loop. I'm using aiohttp 3.8.1.


beesaferoot commented Jun 30, 2022

Hello, we are currently facing this issue: we have repeating jobs that run at intervals, and each job makes some requests (mostly POST requests).

This has been happening ever since we migrated to aiohttp. A fix was to use aiohttp.TCPConnector(force_close=True) or to use HTTP/1.0 via aiohttp.ClientSession(version=http.HttpVersion10), but we would like to reuse connections without force-closing after every request.

From my investigation on the network side, it shows that the client fails to return an accompanying ACK packet after already exchanging FIN and FIN-ACK packets with the server, which results in the server sending an RST packet as a way to gracefully close the connection.

version: aiohttp==3.8.1

Any help resolving this would be appreciated.
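The two workarounds mentioned above can be sketched as follows (a minimal, network-free illustration of the session setup, not a recommended configuration):

```python
import asyncio

import aiohttp
from aiohttp import http


async def make_sessions():
    # Workaround 1: force_close=True disables keep-alive, so every request
    # opens a fresh connection instead of reusing one the server may have
    # already half-closed.
    s1 = aiohttp.ClientSession(
        connector=aiohttp.TCPConnector(force_close=True)
    )

    # Workaround 2: HTTP/1.0 has no keep-alive by default, so the
    # connection is closed after each response.
    s2 = aiohttp.ClientSession(version=http.HttpVersion10)

    versions = (s1.version, s2.version)
    await s1.close()
    await s2.close()
    return versions


default_version, http10_version = asyncio.run(make_sessions())
```

Both trade connection reuse for reliability, which is exactly the cost the comment above wants to avoid.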

galaxyfeeder (Contributor) commented Aug 17, 2022

We are also facing this issue, it happens from time to time. We haven't investigated as far as @beesaferoot.

Version: aiohttp==3.8.1
Python: 3.10.4

@jeremy010203

Hello, we also have the problem in our application (~20 req/s) for about 1 in every ~500 to 1000 requests.
Setting the TCPConnector and/or HTTP version didn't solve the issue.
The fix for us was to catch the exception and retry, for now.

Python 3.9 and aiohttp 3.8.1
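A catch-and-retry wrapper along these lines (the function name and parameters are illustrative, not part of any aiohttp API) might look like:

```python
import asyncio

import aiohttp


async def post_with_retry(session, url, *, retries=3, backoff=0.5, **kwargs):
    # Retry when aiohttp raises ClientOSError (e.g. "Can not write request
    # body" on a stale keep-alive connection), with a growing delay.
    for attempt in range(retries):
        try:
            return await session.post(url, **kwargs)
        except aiohttp.ClientOSError:
            if attempt == retries - 1:
                raise
            await asyncio.sleep(backoff * (attempt + 1))
```

The last attempt re-raises, so callers still see the error if it persists.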

@openspeedtest

import asyncio
import io
import os

import aiohttp
from tqdm.asyncio import tqdm


URL = 'http://your-ip:3000/upload'


async def chunks(data, chunk_size):
    with tqdm.wrapattr(io.BytesIO(data), 'read', total=len(data)) as f:
        chunk = f.read(chunk_size)
        while chunk:
            yield chunk
            chunk = f.read(chunk_size)


async def download(session, chunk_size):
    data_to_send = os.urandom(30_000_000)
    data_generator = chunks(data_to_send, chunk_size)
    # Use a context manager so the response is released after the request.
    async with session.post(URL, data=data_generator) as resp:
        await resp.read()

        
async def main():
    async with aiohttp.ClientSession() as session:
        tasks = [] 
        for _ in range(5):
            t = asyncio.create_task(download(session, 4096))
            tasks.append(t)
        await asyncio.gather(*tasks)
            

asyncio.run(main())

I am trying to make a CLI client for OpenSpeedTest-Server and I am getting the same kind of error.
To reproduce this, use our Docker image or Android app, then make a POST request to "http://your-ip:3000/upload".
Issues:
For the Docker image, it will only send the first chunk.
For the Android app, it will throw an error like this:

Traceback (most recent call last):
  File "r.py", line 35, in <module>
    asyncio.run(main())
  File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "r.py", line 32, in main
    await asyncio.gather(*tasks)
  File "r.py", line 23, in download
    await session.post(URL, data=data_generator)
  File "/Users/goredplanet/Library/Python/3.8/lib/python/site-packages/aiohttp/client.py", line 559, in _request
    await resp.start(conn)
  File "/Users/goredplanet/Library/Python/3.8/lib/python/site-packages/aiohttp/client_reqrep.py", line 898, in start
    message, payload = await protocol.read()  # type: ignore[union-attr]
  File "/Users/goredplanet/Library/Python/3.8/lib/python/site-packages/aiohttp/streams.py", line 616, in read
    await self._waiter
aiohttp.client_exceptions.ClientOSError: [Errno 32] Broken pipe

It works fine when using the Electron apps of OpenSpeedTest-Server (the Windows, Mac, and Linux GUI server apps), which use an Express server.

The mobile apps use the Ionic web server; for Android it's a NanoHTTPD server, and for iOS it's GCDWebServer. For Docker we use the Nginx web server; the configuration is posted on my profile.


bralbral commented Dec 16, 2022

Same.

python: 3.10
aiohttp: 3.8.3
aiochclient: 2.2.0

@DaemonSnake

@asvetlov any news?

@KnownBlackHat

It's been 3 years; can we get any update?


KnownBlackHat commented Aug 28, 2023

> Hello, we are currently facing this issue where we have repeating jobs that run at intervals; each job makes some requests (mostly POST requests). [...]

@beesaferoot could you provide the reproduction code? I will try to make a PR fixing this if I can solve the issue, but for that I need code which reproduces it consistently.

@Dreamsorcerer (Member)

There is no update. If someone can create a PR with a test that reproduces the error, then we can look into it, but we really don't have the time to try to figure anything out from the above comments.

#6138 (comment) suggests that the receiving end ran out of connections and so the connection got rejected (if that's the case, I'm not really sure there's a bug here...).

Meanwhile, #6138 (comment) suggests that there could be an issue with keep-alive connections (which makes it sound like a different issue from the previous comment...). If we can get a test that reproduces these steps, then maybe we can fix something.

@KnownBlackHat

So in my case this error was not from this library; it was Cloudflare, which has a maximum upload size per request.

For whoever is getting this error: if the website you are making POST requests to is behind Cloudflare, its upload limit applies too.


skrcka commented Nov 29, 2023

I was getting this issue when repeating requests in a short period of time.

In my case, manually closing the session after every request helped.
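The workaround described above, a short-lived session per request (sacrificing connection reuse), can be sketched as follows; the function name and URL parameter are illustrative:

```python
import aiohttp


async def fetch_once(url):
    # A fresh session per request: every call opens and fully closes its
    # own connection, so no stale keep-alive socket can be reused.
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:
            return await resp.text()
```

This adds a TCP (and possibly TLS) handshake per request, so it is a trade-off rather than a fix.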


fkurushin commented Jan 31, 2024

I have fixed this bug by creating a try/except block in a while loop with sleep and retry:

# init
conn = aiohttp.TCPConnector(limit_per_host=30)
self.__session = aiohttp.ClientSession(
    self.__url,
    # timeout=self.__timeout,
    raise_for_status=True,
    connector=conn,
)

# method
ids_info = None
retries = 0
while not ids_info:
    try:
        async with self.__session.get(
            self.__path, json={"ids": ids}
        ) as response:
            if response.status == 200:
                data = await response.json(content_type="text/plain")
                ids_info = data["info"]
                if not ids_info:
                    return dict()
                return ids_info
            # non-200 status
            return dict()
    except ClientOSError as e:
        logger.exception(f"retry number={retries} with error: {e}")
        retries += 1
        if retries >= self.__max_retries:
            return dict()
        await asyncio.sleep(1)

but I do not think it is the proper way. The main thing I have noticed is that this error occurs at a random time, so I cannot reproduce it.

@dmdhrumilmistry

I faced this issue while trying to proxy my requests to a server, and I figured out that the proxy server wasn't able to handle that amount of requests. It could be that others are facing the same kind of issue. Maybe try rate limiting your requests.
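Client-side rate limiting as suggested above can be approximated by bounding the number of in-flight requests with a semaphore (a sketch; the limit of 10 and the function names are arbitrary):

```python
import asyncio

import aiohttp


async def fetch_limited(session, semaphore, url):
    # The semaphore caps how many requests are in flight at once,
    # which keeps a proxy or upstream from being overwhelmed.
    async with semaphore:
        async with session.get(url) as resp:
            return resp.status


async def run_all(urls):
    semaphore = asyncio.Semaphore(10)
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(
            *(fetch_limited(session, semaphore, u) for u in urls)
        )
```

Note this limits concurrency, not requests per second; for a true rate limit you would add a delay or token-bucket scheme on top.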
