implement backpressure for stream requests #1
We cannot exactly "split" arbitrary streams in Elixir (see this issue) because of the lack of built-in thunks and because streams tend to wrap stateful resources like files. I would say that this is achievable with piping
The other approach I think has legs is to detect that the message is going to exceed a window and synchronously wait in the `Enum.reduce_while/3`:

```elixir
Enum.reduce_while(messages, {:ok, conn}, fn message, {:ok, conn} ->
  {wire_data, byte_size} = Request.to_wire_data(message)

  connection_window = Mint.HTTP2.get_window_size(conn, :connection)
  request_window = Mint.HTTP2.get_window_size(conn, {:request, request_ref})
  smaller_window = min(connection_window, request_window)

  with false <- byte_size > smaller_window,
       {:ok, conn} <- Mint.HTTP.stream_request_body(conn, request_ref, wire_data) do
    {:cont, {:ok, conn}}
  else
    true -> get_until_window_increase(conn, request_ref, smaller_window)
    error -> {:halt, error}
  end
end)
```

with

```elixir
# note: `request_ref` is passed in here since it is not otherwise in scope
defp get_until_window_increase(conn, request_ref, smaller_window) do
  # TODO handle `responses`
  with {:ok, conn, _responses} <- Mint.HTTP.recv(conn, 0, 5_000),
       connection_window = Mint.HTTP2.get_window_size(conn, :connection),
       request_window = Mint.HTTP2.get_window_size(conn, {:request, request_ref}),
       new_smaller_window when new_smaller_window > smaller_window <-
         min(connection_window, request_window) do
    {:cont, {:ok, conn}}
  else
    ^smaller_window -> get_until_window_increase(conn, request_ref, smaller_window)
    error -> {:halt, error}
  end
end
```

In either case, the connection may receive messages for other
Looks like this is also an issue in the finch implementation: https://github.com/keathley/finch/issues/88
It's worth checking whether this is just the EventStore telling the client to stop trying to send so much data. It appears, though, that the EventStore is properly telling the client to expand the window size on the connection (stream 0) and also later on the request (stream 5). Since the code is not directing Mint to listen for these frames, Mint is not magically adjusting the window, so this appears to be standard HTTP/2 request back-pressure.
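As a minimal sketch of what "directing Mint to listen" could look like (an assumption, not Spear's actual code; only `conn`, an open `Mint.HTTP2` connection, is presumed in scope): incoming transport messages must be fed through `Mint.HTTP.stream/2` before Mint will account for any WINDOW_UPDATE frames they carry.

```elixir
# Sketch: feed one incoming transport message through Mint so that any
# WINDOW_UPDATE frames it carries are applied to Mint's window bookkeeping.
receive do
  message ->
    case Mint.HTTP.stream(conn, message) do
      {:ok, conn, _responses} ->
        # the windows may now be larger if a WINDOW_UPDATE arrived
        {Mint.HTTP2.get_window_size(conn, :connection), conn}

      {:error, conn, _reason, _responses} ->
        {:error, conn}

      :unknown ->
        # the message was not meant for this connection
        {:unknown, conn}
    end
end
```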
this is the small refactor alluded to in #1 (comment)
See #3 for the implementation that 🔪d this instead of synchronously blocking in the
This has the potential to hang a suspended stream if the server never replenishes our window, but luckily Spear is only written to interact with one kind of server, an EventStoreDB, which we can show to be conformant with proper window-refill behavior. I.e. I think it's an acceptable risk.
Currently `Spear.append/4` with an infinite stream fails (expected) but not for the expected reason (a GenServer timeout); instead it gives
Currently the setup for all out-bound requests is like so:
and `stream_body/3` is implemented like so:

As it turns out, this is actually very similar to the streaming implementation in finch! (PR that added streaming support: https://github.com/keathley/finch/pull/107/files#diff-48431cc1d91063480b5006d7585c96ea39433e319aca2b5e3a6c597fdbd7e10fR153-R158)
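The code blocks referenced above were lost in this page capture. As a hypothetical reconstruction (names and shape assumed, not Spear's actual source), `stream_body/3` presumably reduces over the message stream and pushes each encoded chunk onto the request:

```elixir
# Hypothetical sketch of stream_body/3, mirroring the Enum.reduce_while/3
# shown elsewhere in this thread (not the actual Spear source): encode and
# send each message in turn, halting the reduction on the first error.
defp stream_body(conn, request_ref, messages) do
  Enum.reduce_while(messages, {:ok, conn}, fn message, {:ok, conn} ->
    {wire_data, _byte_size} = Request.to_wire_data(message)

    case Mint.HTTP.stream_request_body(conn, request_ref, wire_data) do
      {:ok, conn} -> {:cont, {:ok, conn}}
      error -> {:halt, error}
    end
  end)
end
```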
If we add some `IO.inspect/2`s of the window sizes in that `Enum.reduce_while/3`, we can see the window size for the connection and request gradually decreasing down to (in this case) 26, which is not enough to send the next message.
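Such instrumentation might look like this (a sketch; `conn` and `request_ref` are assumed to be in scope inside the reduce):

```elixir
# Sketch: inspecting the HTTP/2 flow-control windows as we reduce.
connection_window =
  conn
  |> Mint.HTTP2.get_window_size(:connection)
  |> IO.inspect(label: "connection window")

request_window =
  conn
  |> Mint.HTTP2.get_window_size({:request, request_ref})
  |> IO.inspect(label: "request window")

# the effective window is the smaller of the two
min(connection_window, request_window)
```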
We can cheaply detect the window sizes as we reduce, but it's not immediately clear how to halt the stream temporarily while we `Mint.HTTP.stream/2` and await a WINDOW_UPDATE frame (once we realize that our window is not large enough).