Description
Unsure if this is blocked: by the time you read this, httpx may have reproduced and/or fixed the issue upstream, in which case it can be addressed in the library itself. Context: https://chatgpt.com/share/684b49d8-ff70-8002-b91c-c817826f2a68
Motivation:
HTTP/2 was released in 2015 (and is now already being succeeded by HTTP/3...) and has many advantages over HTTP/1.1, including but not limited to multiplexing many requests over a single connection and header compression.
These advantages precipitated our migration from aiohttp to httpx in #71 (httpx in theory supports HTTP/2... more on this later). The migration was also motivated by unresolved issues in task cleanup (#70 and aio-libs/aiohttp#7551) and by the fact that aiohttp only supports HTTP/1.1 (aio-libs/aiohttp#5631; how is this possible? It's 2025 and this is a core library).
Problem
We realized that when enabling HTTP/2 (`http2=True`) we kept running into errors such as the following.
Running:

```shell
❯ uv sync
❯ pytest tests/test_performance_tests.py::test_benchmark_hamt_store
```

Test results:

```
FAILED tests/test_performance_tests.py::test_benchmark_hamt_store - httpx.RemoteProtocolError: <ConnectionTerminated error_code:0, last_stream_id:1999, additional_data:None>
```
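The `last_stream_id:1999` in the error is itself evidence for the request-limit explanation: in HTTP/2, client-initiated streams use odd IDs (1, 3, 5, ...), so request n gets stream ID 2n - 1. A quick check:

```python
# Client-initiated HTTP/2 streams are odd-numbered: request n uses stream ID 2n - 1.
def request_number(stream_id: int) -> int:
    """Map an odd client-initiated stream ID back to its request ordinal."""
    assert stream_id % 2 == 1, "client-initiated streams are odd-numbered"
    return (stream_id + 1) // 2

# last_stream_id:1999 from the traceback is exactly the 1000th request on
# the connection, consistent with a server-side limit of 1000 requests.
print(request_number(1999))  # 1000
```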
This results from the number of requests on the connection reaching the NGINX server's limit of 1000, at which point the server closes the connection... and httpx does not open another. There is discussion in encode/httpx#2112 about how httpx should renew connections, with workarounds ranging from disabling HTTP/2 altogether to retrying the httpx requests with tenacity (i.e., manually maintaining these retries; it's insane that this needs to be done). We tried a few ways of manually handling sessions, but this shouldn't live in py-hamt; it should be handled under the hood by the httpx library.
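For reference, the tenacity-style workaround from those threads amounts to retrying any request that dies when the server tears down the connection. A minimal stdlib-only sketch of the pattern; `RemoteProtocolError` here is a stand-in for `httpx.RemoteProtocolError` so the snippet is self-contained, and the retry counts/backoff are illustrative:

```python
import time


class RemoteProtocolError(Exception):
    """Stand-in for httpx.RemoteProtocolError (server sent GOAWAY mid-request)."""


def retry_on_goaway(fn, attempts=3, backoff=0.1):
    """Call fn(), retrying if the server tears the connection down underneath us."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except RemoteProtocolError:
            if attempt == attempts:
                raise  # out of retries; surface the error
            time.sleep(backoff * attempt)  # simple linear backoff


# Demo: fail twice with a GOAWAY-style error, then succeed on the third call.
calls = {"n": 0}

def flaky_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RemoteProtocolError("ConnectionTerminated error_code:0")
    return "ok"

result = retry_on_goaway(flaky_request)  # "ok" after two retries
```

This is exactly the kind of bookkeeping the issue argues callers shouldn't have to write themselves.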
The latest discussion is someone else reproducing the same issue in encode/httpx#3549. There is also a tracking ticket for HTTP/2 robustness: encode/httpx#3324. This should be handled gracefully.
Solution
Either our py-hamt library should handle the request limits and connection renewal itself (gross; it also means dealing with in-flight requests, making sure not to cancel them, awaiting them, and doing retries on top) or, better, the httpx library itself should handle these stateful HTTP/2 requests/connections.
ChatGPT suggested that increasing the `max_connections` limit would work, allowing the pool to cycle connections, but we didn't see that. Allegedly:

> Lower the concurrency floor so you never hit 2000 open streams (e.g. Limits(max_connections=1500))
>
> No GOAWAY → no race window. Usually the simplest fix.
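For concreteness, the suggestion corresponds to passing an `httpx.Limits` object when constructing the client. A configuration sketch (values are illustrative, and as noted above this did not actually fix the problem for us):

```python
import httpx

# Cap pool concurrency so outstanding streams stay under the server's
# per-connection request budget. These numbers are illustrative, not a
# verified fix.
limits = httpx.Limits(
    max_connections=100,           # total concurrent connections in the pool
    max_keepalive_connections=20,  # idle connections kept alive for reuse
)
client = httpx.AsyncClient(http2=True, limits=limits)
```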
Full conversation here: https://chatgpt.com/share/684b49d8-ff70-8002-b91c-c817826f2a68