aiohttp client get request - exception without error message #5239
@esundberg
I get an asyncio.TimeoutError. (I didn't include lines 1-48 of the output.)
If needed, I can spin up a server for troubleshooting this issue if you can't reproduce it in your environment. It's just a very odd issue. Also, Ubuntu 20.04 has not released an update to Python 3.8.6, so I am kind of stuck at 3.8.5; the deadsnakes PPA will not let you update the 3.8 train when you're on Ubuntu 20.04. I tried Python 3.9.0 and I am having the same issue.
It looks like you hit the timeout of 3 seconds.
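A 3-second total timeout like the one referenced here is typically configured via `aiohttp.ClientTimeout`. The original snippet is not preserved in this thread, so this is a hedged reconstruction; the function name and URL are placeholders:

```python
import asyncio
import aiohttp

async def fetch(url: str) -> str:
    # A total timeout of 3 s: if connect + request + body read together
    # exceed it, aiohttp raises a bare asyncio.TimeoutError with no message,
    # matching the "exception without error message" in the issue title.
    timeout = aiohttp.ClientTimeout(total=3)
    async with aiohttp.ClientSession(timeout=timeout) as session:
        async with session.get(url) as resp:
            return await resp.text()

# asyncio.run(fetch("http://example.com/"))  # run against a live endpoint
```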
I updated the code like this, and it got even weirder. Yes, it times out at 3 seconds, but the requests go in blocks: the first ~50 GETs are answered in under a second, the next 50 in 4 seconds, the next 50 in 8-9 seconds, the next 50 in 13 seconds, the next 50 in 17 seconds, and then come the timeout errors.
Output
I put a timer in for each step of the loop. It looks like it's the get statement that is not completing; I'm seeing a lot of missing Step3 messages. Also, I tried this on my PC and it works fine there; the duration of each request is consistently under 0.01 seconds.
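The per-step timers described here might look like the following sketch; the original snippet is not shown in the thread, so the step labels and function name are hypothetical reconstructions of the described output:

```python
import asyncio
import time
import aiohttp

async def timed_get(session: aiohttp.ClientSession, url: str, i: int) -> None:
    # Step1 = task started, Step2 = response headers received,
    # Step3 = body fully read. A missing Step3 line means the GET
    # never completed for that task.
    t0 = time.monotonic()
    print(f"task {i}: Step1 started")
    try:
        async with session.get(url) as resp:
            print(f"task {i}: Step2 after {time.monotonic() - t0:.3f}s")
            await resp.text()
            print(f"task {i}: Step3 after {time.monotonic() - t0:.3f}s")
    except asyncio.TimeoutError:
        print(f"task {i}: timed out after {time.monotonic() - t0:.3f}s")
```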
Output
Those steps look like they are hitting some barrier. @asvetlov, from your experience, do these steps look like too many coroutines scheduled on the loop? @esundberg, you can also try running your script with uvloop. Maybe it will increase the number of requests that don't hit the timeout.
@earlbread
OK, I will try uvloop. It might be a bit of a learning curve because I have never used uvloop before.
Kind of the same results with uvloop, but no timeout error: the code runs the first ~50 tasks, pauses, then completes the next 50 tasks in 4 seconds, followed by the next 50 in 9 seconds. Also, if I press Ctrl+C it takes 2-3 minutes for Python to exit. Here is a video of it: https://youtu.be/RZPnD4ZgGsw (baby dinosaur sounds in the background; my baby is playing next to me in the pack and play :) )
So this is going to be DNS related. It appears that the DNS server is sending a DNS response after the socket used for the lookup has been closed, so the Ubuntu server sends back an ICMP Destination Unreachable (Port Unreachable). If I rerun the script using an IP address in the URL instead of an FQDN, it runs just fine with no delays or errors. Is there a way to have aiohttp use a cached DNS record instead of performing a DNS lookup each time the client needs to send a query?
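aiohttp does ship a per-connector DNS cache: reusing one ClientSession (and therefore one TCPConnector) lets repeated lookups be served from that cache. A minimal sketch, assuming a longer cache TTL is acceptable for this workload (the 300 s value is illustrative, not a recommendation from this thread):

```python
import asyncio
import aiohttp

async def main() -> None:
    # use_dns_cache is on by default; ttl_dns_cache controls how long a
    # resolved host is reused before a fresh lookup (the default is 10 s).
    connector = aiohttp.TCPConnector(use_dns_cache=True, ttl_dns_cache=300)
    async with aiohttp.ClientSession(connector=connector) as session:
        async with session.get("http://example.com/") as resp:
            print(resp.status)

# asyncio.run(main())  # run against a live endpoint
```

The cache only helps if the same session/connector is reused across requests; creating a new ClientSession per request discards it.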
I installed aiohttp[speedups] and the same thing is happening. Is there anything special I need to do to make sure aiodns is used?
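With aiohttp[speedups] installed, aiodns should normally be picked up automatically, but you can force it by passing an `AsyncResolver` to the connector explicitly. A sketch (the URL is a placeholder):

```python
import asyncio
import aiohttp
from aiohttp.resolver import AsyncResolver  # construction fails if aiodns is missing

async def main() -> None:
    resolver = AsyncResolver()  # explicit: always resolve via aiodns
    connector = aiohttp.TCPConnector(resolver=resolver)
    async with aiohttp.ClientSession(connector=connector) as session:
        async with session.get("http://example.com/") as resp:
            print(resp.status)

# asyncio.run(main())  # run against a live endpoint
```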
So I did the following and it seems to fix the issue. It works using 8.8.8.8 and 8.8.4.4 as the DNS servers; all responses come back in order.
With the same code but using the DigitalOcean DNS servers 67.207.67.3 and 67.207.67.2, I get some really weird delays in responses. I wonder if they have a rate limiter on their DNS servers?
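A hedged sketch of the change described here: pinning the resolver to specific nameservers (Google's public DNS, as in the working run) via aiodns, so lookups bypass the provider's default resolvers. The helper function name is hypothetical:

```python
import aiohttp
from aiohttp.resolver import AsyncResolver  # requires aiodns

def make_connector() -> aiohttp.TCPConnector:
    # AsyncResolver forwards keyword arguments to aiodns.DNSResolver,
    # so the nameservers to query can be pinned explicitly.
    resolver = AsyncResolver(nameservers=["8.8.8.8", "8.8.4.4"])
    return aiohttp.TCPConnector(resolver=resolver)
```

A session built with this connector (`aiohttp.ClientSession(connector=make_connector())`) then sends every lookup to the pinned servers.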
If you are hitting a DNS rate limit, you can install a local caching DNS server on your machine and use it for resolving.
I have a lot of servers running aiohttp, so I just set up two local BIND servers to handle the DNS requests. Thanks for the help; these were really weird symptoms for a DNS issue, with no errors pointing towards DNS.
🐞 Describe the bug
I have been having problems with a section of my code that uses the aiohttp client to request phone number information from an API. The API server has been tested to well over 600 queries per second, and it's two servers behind a load balancer.
The program goes like this: we query a FastAPI endpoint, and that endpoint runs a function which queries another API using the aiohttp client, processes the data, and returns the result. We were noticing that under some load aiohttp would raise an exception without any error message. I have narrowed the issue down to the following section of code and removed FastAPI from the mix.
I created an infinite loop that creates an asyncio task calling the same function that performs the aiohttp client request over and over again, sleeping for 0.01 seconds per loop cycle. I see 51 aiohttp requests go out and receive good responses; from request 52 onwards we get an aiohttp exception with no error message.
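The loop just described could be sketched as follows. This is a hedged reconstruction, not the reporter's actual code (which was collapsed out of this page): the URL is a placeholder and the prints only mirror the described behavior.

```python
import asyncio
import itertools
import aiohttp

URL = "http://203.0.113.10/lookup"  # placeholder for the phone-number API

async def do_request(session: aiohttp.ClientSession, i: int) -> None:
    try:
        async with session.get(URL) as resp:
            await resp.text()
            print(f"request {i}: HTTP {resp.status}")
    except Exception as exc:
        # As reported: from roughly request 52 onwards the exception
        # carries no message text.
        print(f"request {i}: {type(exc).__name__}: {exc!r}")

async def main() -> None:
    async with aiohttp.ClientSession() as session:
        for i in itertools.count():
            asyncio.create_task(do_request(session, i))
            await asyncio.sleep(0.01)  # 0.01 s per loop cycle

# asyncio.run(main())  # runs forever against a live endpoint
```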
I am also getting different results on different environments.
Digital Ocean server (shared 8 vCPU, 16 GB RAM, or dedicated 16 CPU, 65 GB RAM) - Ubuntu 20.04, Python 3.8.5, aiohttp 3.7.2, yarl 1.6.3, multidict 5.0.2 - issue happens (side note: CPU only gets to 6-7%)
Digital Ocean server (shared 4 vCPU, 8 GB RAM) - Ubuntu 18.04, Python 3.8.6, aiohttp 3.7.2, yarl 1.6.3, multidict 5.0.2 - no issue
My PC - Windows 10, Python 3.8.2, aiohttp 3.6.3, yarl 1.5.1, multidict 4.7.6 - no issue
VMware server - Ubuntu 18.04 (4 vCPU, 8 GB RAM), Python 3.8.6, aiohttp 3.7.2, yarl 1.6.2, multidict 5.0.0 - no issue
VMware server - Ubuntu 20.04 (4 vCPU, 16 GB RAM), Python 3.8.5, aiohttp 3.7.2, yarl 1.6.2, multidict 5.0.0 - no issue
Output
💡 To Reproduce
Run the code above
Running on a Digital Ocean server
Ubuntu 20.04.1
Python 3.8.5
aiohttp==3.7.2
💡 Expected behavior
aiohttp requests should not stop completing.
📋 Logs/tracebacks
📋 Your version of the Python
📋 Your version of the aiohttp/yarl/multidict distributions
📋 Additional context
This is a pretty big bug for us. Any help would be greatly appreciated.