aiohttp.client_exceptions.ClientOSError: [Errno None] Can not write request body #6138
Comments
I'm having the exact same issue with Python 3.7.4, aiohttp 3.5.4, multidict 4.5.2, yarl 1.3.0. Is there any solution?
This only happens when we query the database inside a for loop... Anyhow, I switched back to the official ClickHouse Python driver, which is synchronous in nature but gets the job done.
It doesn't happen to me while using ClickHouse.
aiohttp 3.7 is EOL and won't get any updates. Is this happening under aiohttp 3.8?
Also, try asking the library you use (aiochclient); maybe they pass invalid args to aiohttp:

```
File "/usr/local/lib/python3.7/site-packages/aiochclient/http_clients/aiohttp.py", line 38, in post_no_return
    async with self._session.post(url=url, params=params, data=data) as resp:
```

There's not enough information provided to guess what's happening, but without understanding what exactly is passed, it's a lost cause. We need an aiohttp-only reproducer demonstrating that this problem actually exists. Without that, we'll probably have to close this, as it does not demonstrate a bug the way it is reported. Current judgment: this is likely a problem in that third-party library; maybe they misuse aiohttp.
I wasn't using aiochclient, but straightforward aiohttp. I was able to solve the issue by watching the nginx logs at the same time I would receive those exceptions in my app, and saw that nginx was reporting: [alert] 7#7: 1024 worker_connections are not enough. To solve this, with a little help from Google, I raised nginx's worker_connections limit. Thanks anyways!
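For anyone else hitting that nginx limit: raising `worker_connections` in the `events` block is the usual fix. A minimal sketch, assuming a default-style config; the value 4096 is an arbitrary example to tune for your workload:

```nginx
# /etc/nginx/nginx.conf (sketch; tune values for your deployment)
worker_processes auto;

events {
    # the default is commonly 1024; raise it so bursts of concurrent
    # requests aren't rejected with "worker_connections are not enough"
    worker_connections 4096;
}
```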
I'm also getting this error, although only for a small fraction of requests inside a for loop. I'm using aiohttp 3.8.1.
Hello, we are currently facing this issue: we have repeating jobs that run at intervals, and each job makes some requests (mostly POST requests). This has been happening ever since we migrated to aiohttp. From my investigation on the network side, it shows that the client fails to write the request body. Any help resolving this would be appreciated.
We are also facing this issue; it happens from time to time. We haven't investigated as far as @beesaferoot. Version:
Hello, we also have the problem in our application (~20 req/s), for 1 in every ~500 to 1000 requests. Python 3.9 and aiohttp 3.8.1.
I am trying to make a CLI client for OpenSpeedTest-Server. It works fine when using the Electron apps of OpenSpeedTest-Server (the Windows, Mac, and Linux GUI server apps). The mobile apps use the Ionic web server; for Android it's NanoHTTPD, and for iOS it is GCDWebServer.
Same.
@asvetlov any news?
It's been 3 years, can we get any update??
@beesaferoot could you provide the reproduction code? I will try to make a PR fixing this if I can solve this issue, but for that I need code that reproduces it consistently.
There is no update. If someone can create a PR with a test that reproduces the error, then we can look into it, but we really don't have the time to figure anything out from the above comments. #6138 (comment) suggests that the receiving end ran out of connections and so the connection got rejected (if that's the case, I'm not really sure there's a bug here...), while #6138 (comment) suggests that there could be an issue with keep-alive connections (which makes it sound like a different issue from the previous comment...). If we can get a test that reproduces these steps, then maybe we can fix something.
So in my case this error was not from this library; it was Cloudflare, which has a max file size per upload request. I think, for whoever is getting this error, the reason may be that the website you are making requests to has similar limits.
I was getting this issue when repeating requests in a short period of time. In my case, manually closing the session after every request helped.
I have worked around this bug with a try/except block in a while loop, with sleep and retry:

```python
# init
conn = aiohttp.TCPConnector(limit_per_host=30)
self.__session = aiohttp.ClientSession(
    self.__url,
    # timeout=self.__timeout,
    raise_for_status=True,
    connector=conn,
)

# method
ids_info = None
retries = 0
while not ids_info:
    try:
        async with self.__session.get(
            self.__path, json={"ids": ids}
        ) as response:
            if response.status == 200:
                data = await response.json(content_type="text/plain")
                ids_info = data["info"]
                if not ids_info:
                    return dict()
                else:
                    return ids_info
            else:
                # not 200
                return dict()
    except ClientOSError as e:
        logger.exception(f"retry number={retries} with error: {e}")
        retries += 1
        if retries >= self.__max_retries:
            return dict()
        await asyncio.sleep(1)
```

but I do not think it is the proper way. The main thing I have noticed is that this error occurs at random times, so I cannot reproduce it.
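As a side note, that retry loop can be factored into a small reusable helper. A stdlib-only sketch; the names `get_with_retry` and `flaky` are made up for illustration, and the fake request simply raises `OSError` twice before succeeding, standing in for `ClientOSError` (which subclasses `OSError`):

```python
import asyncio
import random

async def get_with_retry(make_request, max_retries=3, base_delay=0.01):
    """Await make_request(), retrying on OSError with exponential backoff."""
    for attempt in range(max_retries + 1):
        try:
            return await make_request()
        except OSError:
            if attempt == max_retries:
                raise
            # exponential backoff with a little jitter between attempts
            await asyncio.sleep(base_delay * (2 ** attempt) * (1 + random.random()))

# demo: a flaky "request" that fails twice, then succeeds
calls = {"n": 0}

async def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("Can not write request body")
    return {"info": [1, 2, 3]}

result = asyncio.run(get_with_retry(flaky))
print(result)  # → {'info': [1, 2, 3]}
```

In real code the exception caught would be `aiohttp.ClientOSError` and `make_request` would perform the actual `session.get`.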
I faced this issue while I was trying to proxy my requests to a server, and I figured out that the proxy server wasn't able to handle that amount of requests. It could be that others are facing the same kind of issue; maybe try rate limiting your requests.
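One simple way to rate-limit from the client side is to cap concurrency with an `asyncio.Semaphore`. A stdlib-only sketch, where the `asyncio.sleep` stands in for a hypothetical aiohttp request:

```python
import asyncio

async def fetch(i, sem):
    # the semaphore caps how many of these run concurrently;
    # the sleep stands in for an actual HTTP request
    async with sem:
        await asyncio.sleep(0.01)
        return i

async def main():
    sem = asyncio.Semaphore(10)  # allow at most 10 requests in flight
    return await asyncio.gather(*(fetch(i, sem) for i in range(100)))

results = asyncio.run(main())
print(len(results))  # → 100
```

With aiohttp specifically, `TCPConnector(limit=...)` offers a similar cap at the connection level, but a semaphore also bounds how many request coroutines are active at once.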
Describe the bug
To Reproduce
Expected behavior
I'm using these methods again and again inside a for loop. These work most of the time, but sometimes aiohttp throws an error.
Logs/tracebacks
Python Version
aiohttp Version
multidict Version
yarl Version
OS
Linux Debian
Related component
Client
Additional context
No response
Code of Conduct