Weird unidi streams behavior #209

Open
essen opened this issue Aug 30, 2023 · 9 comments
Labels: backlog, enhancement (New feature or request)

Comments

essen commented Aug 30, 2023

For HTTP/3 I am expected to receive data from at least 3 unidi streams.

I wrote a CT test that looks like this:

unidi_create_critical_first(Config) ->
    {ok, Conn} = quicer:connect("localhost", config(port, Config),
        #{alpn => ["h3"], verify => none, peer_unidi_stream_count => 3}, 5000),
    {ok, Conn} = quicer:async_accept_stream(Conn, []),
    timer:sleep(1000),
    ct:pal("~p", [process_info(self(), messages)]),
    ct:pal("~p", [process_info(self(), messages)]),
    ok.

This prints the following:

----------------------------------------------------
2023-08-30 17:15:57.986
{messages,[{quic,streams_available,#Ref<0.3047038216.2871918595.261224>,
                 #{bidi_streams => 100,unidi_streams => 3}}]}


----------------------------------------------------
2023-08-30 17:15:57.986
{messages,[{quic,streams_available,#Ref<0.3047038216.2871918595.261224>,
                 #{bidi_streams => 100,unidi_streams => 3}},
           {quic,new_stream,#Ref<0.3047038216.2871918592.249713>,
                 #{flags => 1,is_orphan => false}},
           {quic,new_stream,#Ref<0.3047038216.2871918592.249714>,
                 #{flags => 1,is_orphan => true}},
           {quic,new_stream,#Ref<0.3047038216.2871918592.249715>,
                 #{flags => 1,is_orphan => true}},
           {quic,<<0,4,0>>,
                 #Ref<0.3047038216.2871918592.249713>,
                 #{absolute_offset => 0,flags => 0,len => 3}}]}

The first thing I notice is that the first ct:pal somehow doesn't print all the messages; I'm not sure why, but it's not important for this issue. If I do a receive I get all these messages, in the order they are printed, with no issue.

The first problem is that the single async_accept_stream call resulted in 3 unidi streams getting accepted. I did not expect that at all.

The second problem is that I am not receiving the data from the two other unidi streams, even if I put more async_accept_stream calls.

qzhuyan (Collaborator) commented Aug 30, 2023

The first problem is that the single async_accept_stream call resulted in 3 unidi streams getting accepted. I did not expect that at all.

The test process is both the connection owner (it created the connection) and the owner of the first stream (it called quicer:async_accept_stream once).

For the 2nd and 3rd streams, since there is no acceptor (quicer:async_accept_stream was called only once), quicer just picks the connection owner as the 'fallback' owner and sends it the new_stream msg (flag: is_orphan => true).

The second problem is that I am not receiving the data from the two other unidi streams

Since the 2nd and 3rd streams have is_orphan = true, their active mode is set to false, which means delivery to the message box is suspended.
See https://github.com/qzhuyan/quic/blob/main/docs/messages_to_owner.md?plain=1#L279
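For illustration, a minimal sketch of how the fallback owner could resume delivery on an orphan stream; this assumes quicer:setopt/3 accepts the active option for stream handles, as the linked document describes:

%% Orphan streams arrive with active mode off; switching it back on
%% resumes delivery of {quic, Data, Stream, _} messages to the mailbox.
take_orphan_stream() ->
    receive
        {quic, new_stream, Stream, #{is_orphan := true}} ->
            ok = quicer:setopt(Stream, active, true),
            {ok, Stream}
    end.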

, even if I put more async_accept_stream calls.

This could be a timing issue: the new streams arrive before you call async_accept_stream.

essen (Author) commented Aug 30, 2023

I think that's what I'm seeing: by the time I call async_accept_stream for the second stream, the new_stream message is already in the mailbox. Right now Cowboy opens the 3 streams immediately when connecting, so if they're automatically picked up by something in quicer I don't have enough time to accept them properly.

essen (Author) commented Aug 30, 2023

The behavior I would expect is for quicer to not accept the stream at all, instead of falling back, and to just wait for me to call async_accept_stream again.

qzhuyan (Collaborator) commented Aug 30, 2023

The behavior I would expect is for quicer to not accept the stream at all, instead of falling back, and to just wait for me to call async_accept_stream again.

Quicer has to accept three streams when peer_unidi_stream_count => 3 is set; I don't see a way to push back other than shutting the stream down.
Quicer could queue the new_stream msg until there are acceptors, but that adds complexity: later data messages will also be incoming, the peer could abort the stream due to a timeout, etc. That is why the current fallback solution just finds an owner and brings this info to the selected owner, so the temporary owner can hopefully react quickly. I need to think more about how to buffer the new_stream msg and its friends when there are no available stream acceptors.

By design, in the fallback case, if the process gets a new_stream message with is_orphan => true, it can either:
a) become the real owner: set the stream active and receive data from the msg box; or
b) spawn a new owner and hand the ownership off to the new process, see quicer:handoff_stream/3.
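A minimal sketch of option (b); this assumes quicer:handoff_stream transfers ownership (and any buffered stream messages) to the new process as described above, and my_stream_worker is a hypothetical worker module:

%% The fallback owner spawns a dedicated process and hands the
%% orphan stream over to it instead of consuming it itself.
spawn_and_handoff() ->
    receive
        {quic, new_stream, Stream, #{is_orphan := true}} ->
            Worker = spawn(my_stream_worker, init, []),
            quicer:handoff_stream(Stream, Worker)
    end.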

essen (Author) commented Aug 31, 2023

Thanks for the explanations. So my confusion was around the accept_stream function.

If there has to be an accept_stream function, I would expect it to not reject the stream, to buffer the information it receives for that stream (new_stream and data), and to not increase the flow control window for that stream (limiting the amount of data that has to be queued). Then, when the application does accept the stream, all the buffered messages can be sent directly.

Otherwise, don't have (or don't use) the accept_stream function and let the connection process handle incoming streams. I think I will change the code to do that for now.

I suppose it could be nice in some cases to have multiple processes accepting streams and dealing with them, but if the connection owner also receives new streams (and has to spawn/handoff) that kind of defeats the purpose of having separate processes doing that.

On that note, I have not seen a way to configure stream flow control; I only see connection flow control. Perhaps I missed something. For HTTP/3 I am required by the spec to allow at least 1024 bytes for these unidi streams.

qzhuyan (Collaborator) commented Aug 31, 2023

Thanks for the feedback.
Yes, one stream per process is the recommended process model when working with QUIC multistreams in quicer, and we also have one process owning the connection. The fallback & handoff feature by design assumes the connection process is mostly idle after connection setup, so it can be utilized to act on connection events like 'new_stream'; to handle new stream messages, the connection owner can spawn a new process to become the owner of the new incoming stream.

I will add a feature switch to disable fallback and queue streams in the connection context, and let's try that approach.

For HTTP/3 I am required by the spec to allow at least 1024 bytes for these unidi streams.

For stream flow control, look for stream_recv_window_default in the connection opts and set it to 1k.
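For example, a sketch of the connect call with that option set (option name as given above; the 1024-byte value comes from the HTTP/3 requirement quoted earlier):

%% Connection options with a per-stream receive window of 1024 bytes.
connect_h3(Port) ->
    quicer:connect("localhost", Port,
        #{alpn => ["h3"],
          verify => none,
          peer_unidi_stream_count => 3,
          stream_recv_window_default => 1024},
        5000).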

qzhuyan (Collaborator) commented Aug 31, 2023

Oh, I forgot to say: another reason to prefer the fallback feature is the case where you have 3 processes spawned for accepting new streams but the peer only creates one. In that case, two processes are just idling there. Imagine you have many connections; we could not ignore the fact that 2x numOfConnections processes are idling. So the on-demand approach (create a new process when the new_stream msg is received) wins in this case.

essen (Author) commented Aug 31, 2023

Makes sense to do it like this when it's possible, although having a separate process per stream is not always a good idea when you have a lot of connections.

For HTTP it only makes sense to create a new process after the request has been validated. Creating a new process right away is an upfront cost that we can delay until after parsing the request.

On top of that, QPACK would make things more difficult, because parsing of the request headers can be blocked until we receive the appropriate data on the QPACK unidi streams, and the encoding of the response may also change depending on the QPACK unidi streams; using a separate process immediately would therefore require exchanging a lot of Erlang messages to account for peer QPACK state changes. Similarly, the streams may update the local QPACK state. It's simply much easier to keep the QPACK states within a single process (the connection process) and let that process deal with decoding/encoding the headers.

Using a separate process for unidi streams would make little sense as well; for HTTP/3 only the bidi streams do any real work.

As a result I only have one process handling everything QUIC related. One process per connection with all streams handled by that same process.

Note that in my comments above I'm not saying it has to be a separate process that calls accept_stream, just that if there is an accept_stream function I wouldn't expect to receive the new_stream message anywhere that hasn't called the function. I am fine with either: the connection process receives all new_stream events and we don't need an accept_stream function; or the callers of accept_stream receive at most one new_stream per call (the connection process would have to call it for each stream it wants to accept; it doesn't have to be a separate process).

The issue with the fallback is that if I create a separate process that calls accept_stream, but it doesn't call accept_stream quickly enough, the connection process will receive the new_stream. It is this race condition that I think quicer shouldn't allow. That said, I think the connection process receiving everything makes more sense, and that's what I'll be doing now.
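A rough sketch of that single-process model, assuming (as described above) that the connection owner receives new_stream for every incoming stream when no acceptor is present; register_stream/2 and handle_data/3 are hypothetical helpers:

connection_loop(State) ->
    receive
        {quic, new_stream, Stream, #{is_orphan := true}} ->
            %% Keep ownership in the connection process: re-enable
            %% active mode so this stream's data reaches the mailbox.
            ok = quicer:setopt(Stream, active, true),
            connection_loop(register_stream(Stream, State));
        {quic, Data, Stream, _Props} when is_binary(Data) ->
            connection_loop(handle_data(Stream, Data, State))
    end.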

qzhuyan (Collaborator) commented Aug 31, 2023

Thanks for clarifying it!

I agree. async_accept_stream is confusing, and calling it should not be required, since quicer sends the new_stream msg to the connection owner anyway when there are no available acceptors.

qzhuyan added the enhancement (New feature or request) and backlog labels on Aug 31, 2023