
Applying backpressure to over-eager pipelining clients #1368

Closed
njsmith opened this issue Nov 5, 2016 · 8 comments

@njsmith commented Nov 5, 2016

Long story short

Asyncio seems to continue reading and buffering incoming requests from a client, even while it's still handling the previous request. This is weird and a mild DoS problem, since a client can trivially cause aiohttp's receive buffer to grow to unbounded size.

Expected behaviour

If I try to send lots of requests at an aiohttp server then it should buffer a small amount and then apply backpressure to me.

Actual behaviour

It just queues up requests indefinitely.

Steps to reproduce

Point this client at an aiohttp server:

import sys, socket

host, port = sys.argv[1], int(sys.argv[2])
with socket.create_connection((host, port)) as sock:
    get = b"GET / HTTP/1.1\r\nHost: " + host.encode("ascii") + b"\r\n\r\n"
    requests_sent = 0
    while True:
        # Pipeline requests forever without ever reading a response.
        sock.sendall(get)
        requests_sent += 1
        if requests_sent % 1000 == 0:
            print("Sent {} requests".format(requests_sent))

Ideally after some time the number of requests sent should stop increasing because sock.sendall starts blocking, and the aiohttp server's memory usage should stop increasing.
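
For reference, here is a minimal sketch of the behaviour being asked for, using a bare asyncio protocol rather than aiohttp's actual code: reading is paused while a request is being handled, so the kernel receive buffer fills up and the client's sendall eventually blocks. The class name, port, and the sleep standing in for request handling are all illustrative.

import asyncio

RESPONSE = b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok"

class BackpressureProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport

    def data_received(self, data):
        # Stop pulling bytes off the socket until this chunk has been handled;
        # the kernel receive buffer then fills and the client's sendall() blocks.
        self.transport.pause_reading()
        asyncio.get_running_loop().create_task(self.handle(data))

    async def handle(self, data):
        # Request parsing elided; pretend the chunk is one request and answer it.
        await asyncio.sleep(0.01)
        self.transport.write(RESPONSE)
        self.transport.resume_reading()

async def main():
    loop = asyncio.get_running_loop()
    server = await loop.create_server(BackpressureProtocol, "127.0.0.1", 8080)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())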

@fafhrd91 (Member) commented Nov 5, 2016

I am planning to work on this task.

@fafhrd91 self-assigned this Nov 22, 2016

@pfreixes (Contributor) commented Jun 8, 2017

Any news about that?

@fafhrd91 (Member) commented Jun 8, 2017

We need a champion for this issue; the fix is relatively simple.

@zmedico commented Jun 8, 2017

Some useful configuration parameters:

  • Individual POST size limits.
  • Cumulative concurrent POST size limits.
  • Concurrent connection limits. Allow the kernel to queue new connections and eventually refuse them when the queue grows too large (queue size controlled by the net.core.somaxconn sysctl on Linux).
  • For connections that have been accepted, limit new connections from the same client address, with separate thresholds for queuing and for rejecting further connections from that address.
  • Limit the combined number of concurrently accepted connections from all clients (see the sketch after this list).
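
As an illustration of the last point only (plain asyncio streams, not aiohttp configuration; MAX_CONNECTIONS and the port are made-up values), the combined number of concurrently accepted connections could be capped roughly like this:

import asyncio

MAX_CONNECTIONS = 100
active = 0

async def handle(reader, writer):
    global active
    if active >= MAX_CONNECTIONS:
        writer.close()                 # over the limit: refuse outright
        await writer.wait_closed()
        return
    active += 1
    try:
        await reader.read(65536)       # request parsing elided
        writer.write(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
        await writer.drain()
    finally:
        active -= 1
        writer.close()
        await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 8080)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
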

@fafhrd91 (Member) commented Jun 8, 2017

The first two points are implemented at the aiohttp.web level; the other points are not related to this issue.
This one is about a single client sending too many pipelined HTTP requests.

@asvetlov (Member) commented Feb 27, 2018

We are going to make every write an async function.
#2698
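
Roughly, the idea is the standard asyncio streams pattern shown below (a generic sketch, not the #2698 code itself): an awaitable write lets the handler coroutine suspend whenever the peer stops reading fast enough, which is where the backpressure comes from. The function name is illustrative.

import asyncio

async def send_body(writer: asyncio.StreamWriter, chunks):
    for chunk in chunks:
        writer.write(chunk)     # buffers the bytes on the transport
        await writer.drain()    # backpressure point: waits for the peer to catch up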

@fafhrd91 (Member) commented Feb 27, 2018

We just need to limit the number of in-flight requests.
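
For illustration only (not the actual aiohttp change; MAX_IN_FLIGHT is a made-up value), a bounded queue between the request parser and the handler is one way to do that: put() blocks once the limit is reached instead of buffering pipelined requests without bound.

import asyncio

MAX_IN_FLIGHT = 8
requests = asyncio.Queue(maxsize=MAX_IN_FLIGHT)

async def parser_side(raw_request):
    # Blocks once MAX_IN_FLIGHT requests are already waiting, so no further
    # pipelined requests are pulled off the socket and buffered.
    await requests.put(raw_request)

async def handler_side():
    while True:
        raw_request = await requests.get()
        ...  # handle the request and write the response
        requests.task_done()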

@asvetlov (Member) commented Jan 3, 2020

The issue is fixed in the aiohttp 3.x line.
Sorry for keeping it open for so long.

@asvetlov closed this Jan 3, 2020