
Consider using smaller data frames for push #116

Closed
jmcc0nn3ll opened this issue Feb 16, 2016 · 4 comments
@jmcc0nn3ll
Contributor

migrated from Bugzilla #409118
status ASSIGNED severity enhancement in component http2 for 9.3.x
Reported in version unspecified on platform All
Assigned to: Project Inbox

On 2013-05-27 04:45:17 -0400, Thomas Becker wrote:

Build Identifier:

It might make sense to push smaller data frames, to be able to react better to browsers that reset push streams. Less data would have been sent to the browser in case of RSTs.

Or make sure that data is only sent if the browser already accepted the push stream.

Reproducible: Always

On 2013-05-27 05:32:51 -0400, Simone Bordet wrote:

Thomas, I am not sure how smaller frames will help the client, especially in case of high latency. I guess we will just be slowing down pushes.

Waiting to push content until the browser accepts the stream is also subject to round trip delays, which SPDY push attempts to eliminate.

Can you elaborate ?

On 2013-05-27 13:41:43 -0400, pat mcmanus wrote:

(In reply to comment # 1)

Thomas, I am not sure how smaller frames will help the client, especially in
case of high latency. I guess we will just be slowing down pushes.

Hi Simone - I made the suggestion for this bug to Thomas, so I'll elaborate.

Any prioritized muxed protocol needs small frame sizes, because once you write the frame data length (in the header) to the wire you are committed to serializing the rest of the frame no matter how long it takes. During that serialization time you cannot react to a new higher priority sending opportunity (such as another stream, or cancellation of the stream you are sending). So it's best to minimize the serialization time of each frame.

The latency of the connection doesn't matter to this strategy, because if at the end of the serialization the data stream that just wrote out the frame is still the highest priority stream then it can just immediately write another one - it doesn't need to wait for an ack or anything that might be driven by rtt. Basically, instead of sending 1 64KB frame, I'm suggesting sending 16 4KB frames in the common case. The only thing it costs you is an extra 120 bytes (15 extra 8 byte headers) - which is trivial overhead on 64KB of data. In return the connection stays responsive to higher priority events.
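Pat's overhead arithmetic can be checked with a tiny sketch (a hypothetical helper, not Jetty code; the 8-byte header size is the SPDY data-frame header he is assuming):

```java
// Sketch of the overhead trade-off Pat describes: splitting one 64 KiB
// payload into 4 KiB frames costs 15 extra 8-byte frame headers,
// i.e. 120 bytes on 65536 bytes of data (~0.18% overhead).
public class FrameOverhead {
    static final int HEADER_BYTES = 8; // assumed SPDY data frame header size

    // Total bytes on the wire for a payload split into frames of frameSize.
    static long wireBytes(long payload, int frameSize) {
        long frames = (payload + frameSize - 1) / frameSize; // ceiling division
        return payload + frames * HEADER_BYTES;
    }

    public static void main(String[] args) {
        long payload = 64 * 1024;
        long one  = wireBytes(payload, 64 * 1024); // 1 frame:   65536 + 8
        long many = wireBytes(payload, 4 * 1024);  // 16 frames: 65536 + 128
        System.out.println(many - one);            // 120 extra bytes
    }
}
```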

So this comment isn't just about push - imo frame sizes should really never go above 4KB. Below that, overhead starts to become an argument, but it's really hard to see the downside of 1 byte in 500 of overhead.

Even pulled things get canceled all the time: when you navigate between pages and the resources aren't finished loading, they get canceled - I want the connection to be able to send the new page data immediately instead of continuing to serialize the old streams.

But push is more likely to run into this problem, because pushed streams are inherently more likely to be canceled. This is because they might be pushing things already in the client cache (unknowable to the server at push time), or they might be pushing things the client won't ever fetch due to policy (e.g. Adblock Plus, or Greasemonkey rewriting your html, etc.), or maybe the client just has push disabled due to resource constraints.

Waiting to push content until the browser accepts the stream is also subject
to round trip delays, which SPDY push attempts to eliminate.

definitely don't wait :)

hth

On 2015-04-01 18:06:07 -0400, Greg Wilkins wrote:

Consider this suggestion for http2

@sbordet sbordet self-assigned this Feb 17, 2016
@sbordet
Contributor

sbordet commented Feb 17, 2016

@gregw I'm unsure about this issue.

Pat suggests performing 16 4K writes rather than 1 64K write.
However, that is 16x more system calls to write().

Right now for large writes we generate all the frames that fit into the flow control window, and then we write them in a single write(), so it is indeed 1 64K write.

Perhaps we need a parameter that stops the generation: rather than generating 4 16K frames as we do now (because 16K is the default max frame size) and then writing them all in a single write (so the write is 64K), we stop generating when reaching the value of this new parameter.
If the new parameter is valued at 4K, we generate just one 4K frame and then write it, and so forth until the whole thing is written or the flow control window is full.
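A minimal sketch of that cutoff, with illustrative names (this is not Jetty's actual generator API): frames are produced until the data runs out, the flow-control window is exhausted, or the proposed per-write limit is reached, and the batch is then flushed in one write().

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical model of the proposed frame-generation cutoff.
public class ChunkedGenerator {
    // Returns the sizes of the frames generated for one write() call.
    static List<Integer> generate(int remaining, int window,
                                  int maxFrameSize, int maxGeneratedBytes) {
        List<Integer> frames = new ArrayList<>();
        int generated = 0;
        while (remaining > 0 && window > 0 && generated < maxGeneratedBytes) {
            int size = Math.min(Math.min(remaining, window),
                                Math.min(maxFrameSize, maxGeneratedBytes - generated));
            frames.add(size);
            remaining -= size;
            window -= size;
            generated += size;
        }
        return frames; // flushed together in a single write()
    }

    public static void main(String[] args) {
        // Current behavior: 64K window, 16K max frame, no cutoff -> 4 frames, one 64K write.
        System.out.println(ChunkedGenerator.generate(64 * 1024, 64 * 1024, 16 * 1024, 64 * 1024));
        // Proposed: cutoff at 4K -> one 4K frame per write().
        System.out.println(ChunkedGenerator.generate(64 * 1024, 64 * 1024, 16 * 1024, 4 * 1024));
    }
}
```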

Note that browsers do enlarge the flow control window to speed up downloads, so the flow control window may be way larger than 64K.

Should this parameter be a function of the flow control window? Rather than a bytes value like 4K, should it be a fraction like 1/16 of the flow control window?
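The window-proportional variant could look like this sketch (the 1/16 ratio comes from the comment above; the 1 KiB floor is my own assumption, added so tiny windows don't degenerate into byte-sized frames):

```java
// Hypothetical sizing of the per-write cutoff as a fraction of the
// current flow-control window, rather than a fixed byte value.
public class WindowFraction {
    static int chunkLimit(int flowControlWindow) {
        // 1/16 of the window, floored at 1 KiB (floor is an assumption).
        return Math.max(1024, flowControlWindow / 16);
    }

    public static void main(String[] args) {
        System.out.println(WindowFraction.chunkLimit(64 * 1024));   // default 64K window -> 4K
        System.out.println(WindowFraction.chunkLimit(1024 * 1024)); // enlarged 1M window -> 64K
    }
}
```

With an enlarged window the cutoff grows proportionally, so browsers that widen the window for fast downloads are not penalized by a fixed 4K limit.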

@gregw
Contributor

gregw commented Feb 18, 2016

I'd say that we have nowhere near enough data on this to decide. Currently the application can decide to push or not... if they decide to push, then we have to trust that they will make a good decision to push resources that are most likely wanted. If they are wanted, then we want to transfer them fast.

If they are not wanted, then best to just not push them rather than push them slowly.

I'd say do nothing on this until we have data that indicates we are wasting resources pushing streams that are early closed (perhaps the push filter should collect that info and learn from it?).

@sbordet
Contributor

sbordet commented Feb 18, 2016

@gregw whether to write larger chunks or not applies not only to pushes, but also to large downloads.

Pat's worry is that when the implementation writes, it has to finish that write no matter how long it takes.
If the write takes a long time to finish, then the system may react slowly to other writes.

However, the only case where the write takes a long time to finish is when the connection is TCP congested.
But in HTTP/2 I would say that TCP congestion is a rare case, one that can only be triggered by flow control windows larger than the bandwidth-delay product for that connection.

And if the connection is TCP congested, other writes may not be written anyway.
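The bandwidth-delay product argument above can be made concrete with a small sketch (the link numbers are illustrative, not from the thread):

```java
// Sketch of the bandwidth-delay product (BDP) reasoning: a large write can
// only queue behind TCP if the flow-control window exceeds the BDP.
public class Bdp {
    // BDP in bytes, given bandwidth in bits/s and round-trip time in ms.
    static long bdpBytes(long bitsPerSecond, long rttMillis) {
        return bitsPerSecond / 8 * rttMillis / 1000;
    }

    public static void main(String[] args) {
        // Example: 10 Mbit/s link, 50 ms RTT -> 62500 bytes in flight.
        long bdp = Bdp.bdpBytes(10_000_000, 50);
        System.out.println(bdp); // 62500
        // A 1 MiB flow-control window far exceeds this BDP, so a single
        // large write could sit in kernel buffers and delay later frames.
        System.out.println(1024 * 1024 > bdp); // true
    }
}
```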

In summary, I think writing smaller chunks is worse in that it increases the overhead (a few more bytes written, more system calls), and I don't see the benefit except in rare cases.

@sbordet
Contributor

sbordet commented Mar 7, 2016

In Jetty 9.4.x stream interleaving has been improved (#360) so that now the unit of interleaving is the frame size. This takes care of making the prioritization of the frames fair with respect to writes.

Closing the issue since it's basically undecided, and we will keep improving HTTP/2 anyway in the future.
