
Optimise sending data in http3 code #261

Open
ddragana opened this issue Oct 18, 2019 · 7 comments

ddragana (Collaborator) commented Oct 18, 2019

If a stream is blocked but still has data, we will try to write on every iteration. We should handle this better and wait for the stream to be unblocked.

ddragana added the task-small label Oct 18, 2019
hawkinsw (Collaborator) commented Nov 5, 2019

I would like to tackle this and wanted to make sure that I have the context correct. You are referring to the condition where a stream has data to send but is blocked because it has exhausted its flow control credit?

   If a sender runs out of flow control credit, it will be unable to
   send new data and is considered blocked.  A sender SHOULD send a
   STREAM_DATA_BLOCKED or DATA_BLOCKED frame to indicate it has data to
   write but is blocked by flow control limits.  These frames are
   expected to be sent infrequently in common cases, but they are
   considered useful for debugging and monitoring purposes.

from https://tools.ietf.org/html/draft-ietf-quic-transport-22#section-4.1

Thanks for all the great work you're doing on neqo. I hope that I'll be able to help.

martinthomson (Member) commented Nov 5, 2019

My understanding is that Http3Transaction::send is invoked every time the stack runs. That object doesn't currently track whether the stream is blocked, but it could, and it could then return false from Http3Transaction::has_data_to_send so that send wouldn't be called on every iteration.
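A minimal sketch of that idea (the struct and field names here are illustrative, not neqo's actual Http3Transaction API): the transaction remembers that its stream ran out of credit and reports no data to send until it is explicitly unblocked.

```rust
struct Transaction {
    buf: Vec<u8>,
    blocked: bool,
}

impl Transaction {
    // Returning false while blocked keeps the stack from calling
    // send() on every iteration.
    fn has_data_to_send(&self) -> bool {
        !self.blocked && !self.buf.is_empty()
    }

    // `credit` is the flow-control allowance; returns bytes consumed.
    fn send(&mut self, credit: usize) -> usize {
        let n = self.buf.len().min(credit);
        self.buf.drain(..n);
        if !self.buf.is_empty() && n == credit {
            // Ran out of credit with data left over: mark blocked.
            self.blocked = true;
        }
        n
    }

    // Called when a MAX_STREAM_DATA / MAX_DATA update arrives.
    fn unblock(&mut self) {
        self.blocked = false;
    }
}
```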

I'm not sure how much of an advantage this is unless we see a lot of blocking.

ddragana (Collaborator, Author) commented Nov 5, 2019

I think I wanted to change the behavior so that a stream is not added to streams_have_data_to_send if it is blocked, and only added once it gets unblocked. This will not make a big difference. I also wanted to change how we consume streams_have_data_to_send. Currently we do
let to_send = mem::replace(&mut self.streams_have_data_to_send, BTreeSet::new());
We should actually take a single element at a time until the set is empty, because a stream can be added back to self.streams_have_data_to_send during iteration.
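A sketch of the drain-one-at-a-time approach (the surrounding struct is assumed for illustration, not neqo's actual code): instead of swapping the whole set out with mem::replace, remove one stream id at a time, so a stream that gets re-inserted during iteration is still picked up.

```rust
use std::collections::BTreeSet;

struct Http3 {
    streams_have_data_to_send: BTreeSet<u64>,
    sent: Vec<u64>, // records the order streams were serviced, for illustration
}

impl Http3 {
    fn process_sending(&mut self) {
        // Take the smallest stream id until the set is empty.
        while let Some(&id) = self.streams_have_data_to_send.iter().next() {
            self.streams_have_data_to_send.remove(&id);
            self.send_stream(id);
        }
    }

    fn send_stream(&mut self, id: u64) {
        // A real implementation would write stream data here and could
        // re-insert `id` if the stream still has data; the loop above
        // would then service it again instead of losing it.
        self.sent.push(id);
    }
}
```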

hawkinsw (Collaborator) commented Nov 5, 2019

I completely understand that distinction and that discussion. But, I still wanted to confirm that I am looking in the right part of the spec to understand the context of how a stream would get into the blocked state.

martinthomson (Member) commented Nov 5, 2019

Yep, you got it right. A peer provides MAX_STREAM_DATA and MAX_DATA. A stream is blocked once the data it has sent reaches those limits, and it stays blocked until the limits increase.
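A sketch of that rule (field names are assumed for illustration): the credit available to a stream is the smaller of its remaining per-stream allowance (MAX_STREAM_DATA) and the remaining connection-wide allowance (MAX_DATA).

```rust
struct FlowControl {
    max_stream_data: u64,   // peer's MAX_STREAM_DATA for this stream
    stream_bytes_sent: u64,
    max_data: u64,          // peer's connection-level MAX_DATA
    conn_bytes_sent: u64,
}

impl FlowControl {
    // Remaining credit: the tighter of the two limits governs.
    fn avail(&self) -> u64 {
        let stream_credit = self.max_stream_data.saturating_sub(self.stream_bytes_sent);
        let conn_credit = self.max_data.saturating_sub(self.conn_bytes_sent);
        stream_credit.min(conn_credit)
    }

    // Blocked exactly when no credit remains on either limit.
    fn blocked(&self) -> bool {
        self.avail() == 0
    }
}
```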

hawkinsw (Collaborator) commented Nov 7, 2019

Thanks for confirming. I am really excited about working on this! It's a free-time thing for me, but I will do my best to make progress as quickly as possible!

hawkinsw (Collaborator) commented Dec 8, 2019

> I think I wanted to change the behavior so that a stream is not added to streams_have_data_to_send if it is blocked, and only added once it gets unblocked. This will not make a big difference. I also wanted to change how we consume streams_have_data_to_send. Currently we do
> let to_send = mem::replace(&mut self.streams_have_data_to_send, BTreeSet::new());
> We should actually take a single element at a time until the set is empty, because a stream can be added back to self.streams_have_data_to_send during iteration.

I have a question! :-)

An http3 server connection's bidi data stream will be marked as having data to send when a response is set. While the server is sending that response, it could run out of credits. If/when the stream becomes blocked because it has run out of credits, you are saying that we should remove the stream's has-data-to-send marking and only re-mark it when the stream becomes writable again?

Just wanted to make sure that I properly understand the goal :-)

hawkinsw added a commit to hawkinsw/neqo that referenced this issue Dec 8, 2019
In light of use cases like mozilla#261, it seems like a good idea
to create/expose a DataBlocked event (a parallel of a DataWritable
event) when a send stream becomes blocked because of a lack
of credits.
hawkinsw added a commit to hawkinsw/neqo that referenced this issue Dec 9, 2019
In light of use cases like mozilla#261, it seems like a good idea
to create/expose a DataBlocked event (a parallel of a DataWritable
event) when a send stream becomes blocked because of a lack
of credits.
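A sketch of the DataBlocked/DataWritable pairing described in the commit message (the enum and method names here are illustrative, not neqo's actual event API): the stack queues an event when a send stream blocks, and a matching event when it becomes writable again.

```rust
use std::collections::VecDeque;

#[derive(Debug, PartialEq)]
enum StreamEvent {
    DataWritable { stream_id: u64 }, // stream has credit again
    DataBlocked { stream_id: u64 },  // stream ran out of credit
}

#[derive(Default)]
struct Events {
    queue: VecDeque<StreamEvent>,
}

impl Events {
    fn data_blocked(&mut self, stream_id: u64) {
        self.queue.push_back(StreamEvent::DataBlocked { stream_id });
    }

    fn data_writable(&mut self, stream_id: u64) {
        self.queue.push_back(StreamEvent::DataWritable { stream_id });
    }

    // Events are consumed in the order they were produced.
    fn next_event(&mut self) -> Option<StreamEvent> {
        self.queue.pop_front()
    }
}
```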