Socket accept exhaustion during high TCP traffic #72093
My organization noticed this issue after launching several asyncio services that receive either a sustained high volume of incoming connections or regular bursts of traffic. Our monitoring showed a loss of between 4% and 6% of all incoming requests. On the client side we see a socket read error, "Connection reset by peer". On the asyncio side, with debug turned on, we see nothing. After further investigation we determined that asyncio was not calling accept() on the listening socket fast enough.

To test this further we put together several hello-world style examples and put them under load. I've attached the project we used for testing. Included are three Docker files that run the service under different configurations: one runs it as a plain aiohttp service, one uses the aiohttp worker behind gunicorn, and the third runs the aiohttp service with the proposed asyncio patch in place. We used 'wrk' to generate traffic and collect data on the OS/socket errors. For anyone attempting to recreate our experiments, we ran three test batteries against the service for each endpoint using:

wrk --duration 30s --timeout 10s --latency --threads 2 --connections 10 <URL>

The endpoints most valuable for us to test were the ones that replicated some of our production logic:

<URL>/ # Hello World

Our results varied with the available CPU cycles, but we were able to consistently recreate the socket read errors from production using the above tests.

Our proposed solution, attached as a patch file, is to put the socket.accept() call in a loop that is bounded by the listening socket's backlog. We use the backlog value as an upper bound to prevent the reverse situation: starving active coroutines while the event loop keeps accepting new connections without yielding. With the proposed patch in place our loss rate disappeared.

For further comparison, we reviewed the socket accept logic in Twisted, ran similar tests against it, and encountered no loss. Twisted already runs its socket accept in a bounded loop to prevent this issue (https://github.com/twisted/twisted/blob/trunk/src/twisted/internet/tcp.py#L1028).
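For readers without the patch file handy, here is a minimal sketch of the bounded-accept idea described above. It is not the attached patch: the function name, signature, and handler callback are hypothetical, and the real change lives inside the event loop's own accept handling.

```python
def drain_accept_queue(listening_sock, handle_connection, backlog):
    """Accept up to `backlog` pending connections per readiness event.

    Hypothetical helper illustrating the proposed fix: loop over accept()
    until the queue is empty or the backlog bound is reached, so queued
    connections are not dropped and other coroutines are not starved.
    """
    for _ in range(backlog):
        try:
            conn, addr = listening_sock.accept()
        except (BlockingIOError, InterruptedError):
            # Accept queue is empty; wait for the next readiness event.
            return
        conn.setblocking(False)
        handle_connection(conn, addr)
```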
Attaching the patch file.

Thanks -- IIRC this was even brought up during asyncio's early implementation phase, but we didn't have enough experience to warrant the extra logic. It seems like now is the time to do this! I hope you won't need it for 3.4, because that version is out of support. However, I would like to have this upstream in https://github.com/python/asyncio. Is there any chance you can write a unit test for this functionality? Or is the only way to test it really to do a load test?
I'll dig into the existing asyncio unit tests and see what I can come up with. I'm not sure yet exactly what I might test for. The variables involved in reproducing the error are mostly environmental: the CPU speed of the host, the amount of CPU-bound work happening in handler coroutines, and the rate of new connections are the major contributors we've identified. I'm not sure how I might simulate those in a unit test. Would it be sufficient to add a test that ensures _accept_connection calls .accept() on the listening socket 'backlog' number of times in the event there are no OS errors?
Yeah, the tests are often full of mocks, and I trust that the test
I've added a unit test to the patch that asserts sock.accept() is called the appropriate number of times. Worth noting: I had to adjust one of the existing tests to account for the new backlog argument. There is a default value for the argument to preserve backwards compatibility for any callers, but the mock used in that test was not tolerant of the extra argument.
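As a rough illustration of the kind of mock-based test described here (written against the hypothetical drain_accept_queue sketch above, not the actual test added to CPython), the assertion on the accept call count could look like this:

```python
import unittest
from unittest import mock


class AcceptLoopTest(unittest.TestCase):
    def test_accept_called_backlog_times_when_no_errors(self):
        backlog = 100
        sock = mock.Mock()
        # Every accept() succeeds, so the loop should run exactly `backlog` times.
        sock.accept.return_value = (mock.Mock(), ('127.0.0.1', 54321))
        handler = mock.Mock()

        drain_accept_queue(sock, handler, backlog)

        self.assertEqual(sock.accept.call_count, backlog)
        self.assertEqual(handler.call_count, backlog)


if __name__ == '__main__':
    unittest.main()
```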
Added a comment to the .accept() loop with a reference to the issue.

I'll try to get to this during the core dev sprint next week.

Sorry, we didn't get to this. @yury, do you think you could get to this before b1 goes out? It's a pure optimization (and a good one!).

LGTM. Will commit tomorrow.

(Of course I meant b2.)
Hum, I'm concerned by this issue. I would expect that in debug mode a log message would be emitted, maybe something like "accept queue saturated! we are probably losing incoming connections". Does anyone see a way to log a message in such a case? Maybe only in debug mode? I spent time making sure that asyncio logs network-related events to help understand what is happening. It's hard to decide which events should be logged, because logging everything is likely to flood the logs. For example, _SelectorSocketTransport._read_ready() logs an event for
I'm not sure we can do anything here. @kevinconway, did you try to find out if it's possible to log this somehow?

Maybe with the new code it could log a message if it doesn't exit the
New changeset 1dcfafed3cb0 by Yury Selivanov in branch '3.5':
New changeset 62948164ff94 by Yury Selivanov in branch '3.6':
New changeset ac13bf1967b5 by Yury Selivanov in branch 'default':
Guido van Rossum: "Maybe with the new code it could log a message if it doesn't exit the"

Hum, yes maybe. But I would suggest to only log a maximum of one message per
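A hedged sketch of the rate-limited logging being suggested here; the helper name and the one-second interval are assumptions for illustration, not something that was committed as part of this issue.

```python
import logging
import time

logger = logging.getLogger('asyncio')

_last_saturation_log = 0.0
_SATURATION_LOG_INTERVAL = 1.0  # assumed interval: at most one message per second


def maybe_log_accept_saturation(backlog):
    """Warn when the accept loop hits its backlog bound, but no more often
    than once per interval so the log is not flooded under sustained load."""
    global _last_saturation_log
    now = time.monotonic()
    if now - _last_saturation_log >= _SATURATION_LOG_INTERVAL:
        _last_saturation_log = now
        logger.warning(
            "accept queue saturated after %d accepts; "
            "incoming connections may be delayed or dropped", backlog)
```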
This seems fixed, anything left to do here?

Since a PR was merged that claims to address this, I will just close the issue and remove the pending label.