Consider using smaller data frames for push #116
Comments
@gregw I'm unsure about this issue. Pat suggests performing 16 4K writes rather than 1 64K write. Right now, for large writes we generate all the frames that fit into the flow control window, and then we write them in a single write. Perhaps we need a parameter that stops the generation: rather than generating 4 16K frames like we do now (because 16K is the default max frame size) and then writing them all in a single write (so that the write is 64K), we would stop the generation when reaching the value of this new parameter. Note that browsers do enlarge the flow control window to speed up downloads, so the flow control window may be way larger than 64K. Should this parameter be a function of the flow control window? Rather than a byte value like 4K, should it be a fraction like 1/16 of the flow control window?
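For illustration, a minimal sketch of what such a parameter could look like, assuming a hypothetical `maxWriteRatio` knob and a `generateFrames` helper (neither is existing Jetty API): frame generation stops once the bytes batched for a single write reach a fraction of the current flow control window.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

class FrameBatcher {
    // 16K is the HTTP/2 default SETTINGS_MAX_FRAME_SIZE mentioned above.
    private static final int MAX_FRAME_SIZE = 16 * 1024;
    // Hypothetical parameter: fraction of the flow control window to batch per write.
    private final double maxWriteRatio;

    FrameBatcher(double maxWriteRatio) {
        this.maxWriteRatio = maxWriteRatio;
    }

    List<ByteBuffer> generateFrames(ByteBuffer data, int flowControlWindow) {
        // Cap the batch as a function of the window rather than a fixed byte count,
        // since browsers may enlarge the window well beyond 64K.
        int cap = Math.max(1, (int) (flowControlWindow * maxWriteRatio));
        int budget = Math.min(cap, flowControlWindow);
        List<ByteBuffer> frames = new ArrayList<>();
        while (data.hasRemaining() && budget > 0) {
            int length = Math.min(MAX_FRAME_SIZE, Math.min(budget, data.remaining()));
            ByteBuffer frame = data.slice();
            frame.limit(length);
            data.position(data.position() + length);
            budget -= length;
            frames.add(frame);
        }
        return frames; // leftover data waits for the next generation pass
    }
}
```

With `maxWriteRatio = 1.0 / 16` and a 64K window this would batch a single 4K frame per write; with a ratio of 1.0 it degenerates to today's behavior of batching everything the window allows.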
I'd say that we have nowhere near enough data on this to decide. Currently the application can decide to push or not... if they decide to push, then we have to trust that they will make a good decision to push resources that are most likely wanted. If they are wanted, then we want to transfer them fast. If they are not wanted, then it is best to just not push them rather than push them slowly. I'd say do nothing on this until we have data that indicates we are wasting resources pushing streams that are closed early (perhaps the push filter should collect that info and learn from it?).
@gregw whether to write larger chunks or not applies not only to pushes, but also to large downloads. Pat's worry is that when the implementation writes, it has to finish that write no matter how long it takes. However, the only case where the write takes a long time to finish is when it is TCP congested. And if the connection is TCP congested, other writes may not be written anyway. In summary, I think writing smaller chunks is worse in that it increases the overhead (a few more bytes written, more system calls), and I don't see the benefit except in rare cases.
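As a rough back-of-the-envelope illustration of that overhead argument (the figures below are mine, not from the discussion): each HTTP/2 DATA frame carries a fixed 9-byte header, so smaller frames mean more header bytes on the wire and, if each chunk is flushed separately, more write system calls.

```java
public class FrameOverhead {
    static final int FRAME_HEADER = 9; // fixed HTTP/2 frame header size in bytes

    public static void main(String[] args) {
        int payload = 64 * 1024; // the 64K write discussed above
        for (int frameSize : new int[]{16 * 1024, 4 * 1024}) {
            int frames = (payload + frameSize - 1) / frameSize;
            System.out.printf("%5d-byte frames: %2d frames, %3d header bytes%n",
                    frameSize, frames, frames * FRAME_HEADER);
        }
        // 16384-byte frames:  4 frames,  36 header bytes (one gathering write possible)
        //  4096-byte frames: 16 frames, 144 header bytes (up to 16 writes if flushed per frame)
    }
}
```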
In Jetty 9.4.x stream interleaving has been improved (#360) so that now the unit of interleaving is the frame size. This takes care of making the prioritization of the frames fair with respect to writes. Closing the issue since it's basically undecided, and we will keep improving HTTP/2 in future releases anyway.
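For readers unfamiliar with what frame-sized interleaving means in practice, here is a simplified round-robin sketch (not Jetty's actual #360 implementation): each pending stream contributes at most one frame per turn, so a single large response cannot monopolize consecutive writes.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Consumer;

class Interleaver {
    // Each inner deque holds the queued frames of one stream.
    private final Deque<Deque<byte[]>> streams = new ArrayDeque<>();

    void enqueue(Deque<byte[]> streamFrames) {
        if (!streamFrames.isEmpty())
            streams.offer(streamFrames);
    }

    void writeAll(Consumer<byte[]> write) {
        while (!streams.isEmpty()) {
            Deque<byte[]> stream = streams.poll();
            write.accept(stream.poll());   // at most one frame per turn
            if (!stream.isEmpty())
                streams.offer(stream);     // streams with more frames go to the back
        }
    }
}
```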
migrated from Bugzilla #409118
status ASSIGNED severity enhancement in component http2 for 9.3.x
Reported in version unspecified on platform All
Assigned to: Project Inbox
On 2013-05-27 04:45:17 -0400, Thomas Becker wrote:
On 2013-05-27 05:32:51 -0400, Simone Bordet wrote:
On 2013-05-27 13:41:43 -0400, pat mcmanus wrote:
On 2015-04-01 18:06:07 -0400, Greg Wilkins wrote: