HTTP/3 with strict priorities #2700

Closed
wants to merge 15 commits into from

Conversation

@ianswett
ianswett (Contributor) commented May 15, 2019

Replaces streams depending upon streams with a one-byte strict priority value. As much data as possible is sent on streams with a higher priority before sending data from lower-priority streams.
Retains weights and placeholders in order to provide functionality equivalent to HTTP/2.

Includes the sequential and interleaved delivery idea from Patrick Meenan's recent proposal (https://github.com/pmeenan/http3-prioritization-proposal/blob/master/README.md) for the case when multiple streams share the same priority value.

The goal is to fix #2502 and similar issues, add a single operation (a change of strict priority) equivalent to HTTP/2 exclusive reprioritization, which was removed in #2075, and lower the complexity of a compliant priority implementation by allowing a server to specify 0 placeholders, in which case the scheme simplifies to priorities and weights.

Fixes #2502 by removing the ability to depend upon a stream (and therefore upon an unopened or already-closed stream).
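
To make the proposed scheme concrete, here is a minimal scheduler sketch (not part of the PR; `Element` and `sendable` are hypothetical names) under the assumptions described above: a one-byte strict priority where higher values are served first, an optional weight, unweighted elements at a level delivered one at a time, and weighted elements at the same level interleaved by weight.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Element:
    stream_id: int
    priority: int                  # one-byte strict priority; higher is served first
    weight: Optional[int] = None   # None: sequential delivery; 1..256: weighted sharing

def sendable(active: List[Element]) -> List[Element]:
    """Return the elements allowed to send next under the sketched scheme."""
    if not active:
        return []
    top = max(e.priority for e in active)
    level = [e for e in active if e.priority == top]
    sequential = [e for e in level if e.weight is None]
    weighted = [e for e in level if e.weight is not None]
    chosen: List[Element] = []
    if sequential:
        # Unweighted elements are drained one at a time: all data from one element
        # is sent before any data from the next (lowest ID is an arbitrary tie-break).
        chosen.append(min(sequential, key=lambda e: e.stream_id))
    # Weighted elements at the same level interleave, sharing bandwidth by weight;
    # per the PR text, the two groups split the available bandwidth equally.
    return chosen + weighted
```

Elements at a lower strict priority send only when nothing at a higher priority has data to send.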

ianswett added 6 commits May 15, 2019
PRIORITY frames for that request MUST be sent on the control stream. A
PRIORITY frame received after other frames on a request stream MUST be treated
as a connection error of type HTTP_UNEXPECTED_FRAME.
PRIORITY frames for that request MUST be sent on the request stream.

@LPardue (Member), May 15, 2019

The difficulty here is that you'll need to keep the request stream open (from the client-send side) for as long as you want to reprioritize the request. This risks conflicting with the current text in Section 5.1:

An HTTP request/response exchange fully consumes a bidirectional QUIC
stream. After sending a request, a client MUST close the stream for
sending.

That is not insurmountable, you just might need some extra text in the draft to deal with the ripple effects of the changes.

@ianswett (Author, Contributor), May 17, 2019

Agreed, it's possible I should separate that change from this one. I know some WG members are fairly unhappy that they can't process PRIORITY frames immediately in cases when the stream hasn't arrived yet, but it's not 100% clear to me that's too painful in practice.

@ianswett (Author, Contributor), May 17, 2019

I've now reverted this portion of the change, thanks for the suggestion.

LPardue (Member) left a comment

This seems like a strong alternative candidate for modifying prioritisation without massive deviation from the H2 spec.

draft-ietf-quic-http.md
{: #element-dependency-types title="Element Dependency Types"}

Note that unlike in {{!RFC7540}}, the root of the tree cannot be referenced
using a Stream ID of 0, as in QUIC stream 0 carries a valid HTTP request. The
root of the tree cannot be reprioritized.
using 0, so the root of the tree cannot be reprioritized.

The PRIORITY frame can express relationships which might not be permitted based
on the stream on which it is sent or its position in the stream. These
situations MUST be treated as a connection error of type HTTP_MALFORMED_FRAME.
The following situations are examples of invalid PRIORITY frames:

- A PRIORITY frame sent on a request stream with the Prioritized Element Type

@LPardue (Member), May 15, 2019

Since the list is one bullet point long with this change, you could probably roll this back into prose.

@ianswett (Author, Contributor), May 17, 2019

I added back another one, but it's still a short list. I believe one more will end up being added, so keeping the structure for now to minimize diffs.

MikeBishop (Contributor) left a comment

Overall, I like the design if the working group has the will to restructure this much. I think you're burying the lede by not discussing what happens when a placeholder has both weighted and unweighted children. It's at the very bottom of the prioritization section, and it's a major departure from HTTP/2 that should probably be at the top.

I find the idea of simultaneously having a priority and a weight overly complicated. As an alternative, perhaps this could be a property of the placeholder -- children are served either in sequence or round-robin by weight, but you can't mix. (And if you need to mix, just put a placeholder of the opposite type as a child.)

- Requests, identified by the ID of the request stream
5.3, but replaces streams depending upon streams with strict priorities and
placeholders. In the HTTP/3 priority scheme, a stream can be given a strict
priority or can be designated as dependent upon a placeholder or the root.

@MikeBishop (Contributor), May 15, 2019

...or weights, apparently?

HTTP/2 are explicitly encoded with strict prioritization. This simplifies
priority tree management, eliminates potential new race conditions introduced
by HTTP/3's multiple streams, and improves framing efficiency with more than
64 requests.

@MikeBishop (Contributor), May 15, 2019

I'm not sure we need to be explicit about the 64 requests part. Maybe just say "for long-lived connections"?

@dtikhonov (Contributor), May 15, 2019

The sentence can be removed: it's editorial.

@ianswett (Author, Contributor), May 17, 2019

Removed.

of 16 and a default dependency. Requests and placeholders are dependent on the
root of the priority tree; pushes are dependent on the client request on which
the PUSH_PROMISE frame was sent.
of 16, priority of 16, and a default dependency. Requests and placeholders are

@MikeBishop (Contributor), May 15, 2019

I'm surprised that they have an initial weight, when weight can be omitted. Shouldn't the default be to have null weight?

@ianswett (Author, Contributor), May 17, 2019

Agreed, I'll change to 0 weight, which is equivalent to omitting a weight.

the PUSH_PROMISE frame was sent.
of 16, priority of 16, and a default dependency. Requests and placeholders are
dependent on the root of the priority tree; pushes are dependent on the client
request on which the PUSH_PROMISE frame was sent.

@MikeBishop (Contributor), May 15, 2019

...except that your PR removes the ability to depend on streams, so this means push streams start out in an impossible-to-express state. Not necessarily a problem (we're doing it with the orphan placeholder), but make sure it's deliberate.

@ianswett (Author, Contributor), May 17, 2019

Good point, will fix this.

specified, then elements are delivered sequentially and all available data
from one element is sent before any data from the next. If a given priority
level has some elements with weights and some without, the two groups share
the available bandwidth equally.

@MikeBishop (Contributor), May 15, 2019

How weights and explicit priorities interact probably needs to come at the top of the section, since it's a pretty substantial departure from HTTP/2.

@kazuho (Member), May 16, 2019

Would it make sense to change the requirement of the last sentence to something like: "If a given priority level has some elements with weights and some without, those without weights are sent before any of those with weights"?

I suggest this because it would be a simplification. The current text, requiring the bandwidth to be evenly split between the two groups, essentially asks implementations to create one more WRR instance; I would prefer avoiding that.

I think such a change is fine because I do not think most clients would use the prioritization scheme in such a way, and because clients that do want to split the bandwidth can use placeholders to accommodate that.
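
A hedged sketch of the simplification kazuho suggests, reusing the hypothetical `Element` type from the sketch near the top of this page: within one priority level, unweighted elements drain first, so no second WRR instance is needed.

```python
def sendable_at_level(level):
    """kazuho's alternative: within a priority level, unweighted elements are
    drained to completion before any weighted element sends, instead of the
    two groups splitting the bandwidth 50/50."""
    sequential = [e for e in level if e.weight is None]
    if sequential:
        return [min(sequential, key=lambda e: e.stream_id)]   # one at a time
    return [e for e in level if e.weight is not None]         # then WRR by weight
```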

draft-ietf-quic-http.md
@kazuho

kazuho (Member) commented May 15, 2019

Clarification question: does the proposed approach allow specifying the optimal ordering that some of us seemed to agree on in #2502?

The optimal ordering being:

  1. JavaScript and CSS files are sent first, one by one
  2. after that, images are sent with the bandwidth distributed evenly among them

It seems to me that for 2 to happen strictly after 1, a placeholder needs to be allowed to depend on a request stream. However, the text in the PR does not seem to allow that (maybe I'm missing something).

@MikeBishop

MikeBishop (Contributor) commented May 16, 2019

If I've understood the scheme properly, you would achieve that by:

  • No weight on the JS/CSS (i.e. deliver each resource all at once)
  • All JS/CSS dependent on the same parent placeholder; specified priority doesn't necessarily matter unless you want to subdivide
  • Another placeholder dependent on the parent with a lower priority than the JS/CSS files
  • All images dependent on the second placeholder with equal weights
@kazuho

kazuho (Member) commented May 16, 2019

@MikeBishop Thank you for the answer. With your answer in mind I've reread the PR, and maybe it's possible to achieve that without using any placeholders? Something like:

root -+- JS1 (priority=256,weight=N/A)
      +- JS2 (priority=256,weight=N/A)
      +- CSS1 (priority=256,weight=N/A)
      +- CSS2 (priority=256,weight=N/A)
      +- HTML (priority=32,weight=N/A)
      +- image1 (priority=16,weight=16)
      +- image2 (priority=16,weight=16)
      +- image3 (priority=16,weight=16)
      +- image4 (priority=16,weight=16)
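
Under the scheduler sketch near the top of this page, that tree would play out roughly as follows (one reading, with hypothetical stream IDs): the four priority-256 elements are delivered one at a time, the HTML follows, and the images then share bandwidth evenly.

```python
# Hypothetical stream IDs; priorities and weights as in the tree above.
tree = [
    Element(0, 256),  Element(4, 256),         # JS1, JS2 (no weight: one at a time)
    Element(8, 256),  Element(12, 256),        # CSS1, CSS2
    Element(16, 32),                           # HTML
    Element(20, 16, 16), Element(24, 16, 16),  # image1..image4 (equal weights:
    Element(28, 16, 16), Element(32, 16, 16),  #  interleaved once the rest is done)
]
print([e.stream_id for e in sendable(tree)])   # -> [0]: only JS1 may send right now
```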
: An optional unsigned 8-bit integer representing a priority weight for the
prioritized element (see {{!RFC7540}}, Section 5.3). Add one to the value
to obtain a weight between 1 and 256. When absent, indicates the resource
should be delivered all at once or not at all.

@tatsuhiro-t (Contributor), May 16, 2019

A server might not have all the data in place (e.g., when streaming via a proxy). To achieve "delivered all at once or not at all", it seems to me that the proxy has to buffer all the data, which is impossible if the data is unbounded or too large.

@rmarx (Contributor), May 16, 2019

Maybe a similar indication as in the HTTP/2 RFC makes sense:

"When absent, indicates the resource should be allocated the full available resources from its parent, as long as it is able to make progress."

@ianswett (Author, Contributor), May 17, 2019

Thanks @rmarx, suggestion taken.

@rmarx

rmarx (Contributor) commented May 16, 2019

Thank you for beating me to this, Ian :)

I am a bit unsure about this approach. While it seems to allow @pmeenan's setup, as I mentioned at the bottom of this comment #2502 (comment), I'm not sure that is really necessary.

With the existing setup (allowing dependencies on a request stream), we seem to only lose the ability to re-prioritize elements within the same "bucket". E.g., if you're sending a Font file, you cannot have a CSS file interrupt it if they are both under the same Placeholder (HIGHEST in my example there). I am unsure if this is such a bad thing, since both the Font and CSS are similarly important and both need to be downloaded in full to be applied (iiuc). If you still wanted interruptibility, you could add another placeholder "bucket" (e.g., FONTS) to go more fine-grained.

Either way, the current proposed text in this PR is a bit ambiguous about what happens if you receive a new entry with a higher priority than the current highest-priority element: do you interrupt the current response for the new one? (E.g., you had a Font of priority 50 that is about 50% done sending, and now you get a sibling CSS of priority 60: will that interrupt the Font? I would assume so.) That goes against the "be delivered all at once or not at all" mantra.

One of the benefits of Pat's scheme imo is that it is easier to implement and doesn't require keeping track of open/closed streams and separate placeholders etc. By integrating this into the tree-based setup here, we are losing that simplicity (also remarked by @MikeBishop earlier here). I also agree with Mike's remark that this type of thing might make more sense as a property of the Placeholder, rather than each individual stream/node.

I am probably missing something here, but it would be good to get some more clarification on what use case we are trying to support exactly (and imo we should wait for some input from @pmeenan (mainly on the importance of reprioritizing within a sequential list) before proceeding much further).

As a side note: I assume this would be complementary to the Orphan Placeholder in #2690? Because #2502 discusses 2 (imo) separate concepts, and these 2 PRs treat them individually in my perspective.

rmarx mentioned this pull request May 16, 2019
@pmeenan

pmeenan commented May 16, 2019

I'll read through the proposal in a minute, but to answer the "interrupt delivery" question: the expectation across the board is yes, lower-priority streams will get interrupted by higher-priority streams as soon as data for the higher-priority stream is available (I'd actually argue that it's critical to be able to interrupt).

@kazuho

kazuho (Member) commented May 17, 2019

@rmarx

While it seems to allow @pmeenan's setup, as I mentioned at the bottom of this comment #2502 (comment), I'm not sure that is really necessary.

I think that is the question.

While I think this would be a simplification for HTTP/3, it would be an additional complexity for HTTP/2+HTTP/3 as a whole. At the moment, the prioritization scheme of HTTP/3 allows implementations to reuse the logic of HTTP/2 prioritization (only what's visible on the wire is different).

@mikkelfj

mikkelfj (Contributor) commented May 17, 2019

If strict interruption is not critical, it would simplify the QUIC API if weights could be applied and updated on active QUIC streams, such that the QUIC scheduler can use weighted round robin to decide what to send next without tracking which streams are currently blocked. It would also allow very small streams to complete even if they have low priority, for example small SVG icons. On the other hand, HTTP/3 probably shouldn't start new streams of lower priority while there are higher-priority streams still transmitting or ready to be transmitted.

@rmarx

rmarx (Contributor) commented May 17, 2019

To clarify, I am not saying strict interruption is not critical. I am saying strict interruption inside a single, sequential queue is not critical.

As in: CSS/JS should still definitely interrupt images and the small SVG icon should not make progress until higher priority resources are fully fetched.

The main thing we lost, imo, when getting rid of exclusive dependencies was the ability to fully re-prioritize within a sequential list, the way Chrome builds its H2 setup (without placeholders, all nodes are daisy-chained). In #2502 (comment) I did a thought experiment of how we might reproduce Chrome's behaviour with the current text, and it comes out very similar. The only thing missing is the ability to interrupt within a single priority "bucket" (represented by a placeholder). My point is that I doubt this is necessary, as I don't feel you want to interrupt e.g. a Font file for a CSS file. And even if you did want that, you could add an extra priority bucket. I am however not very sure this is the case, so that's what I wanted @pmeenan's input on.

If that specific ability is not very critical, it is my feeling we do not need this change, as it would add much complexity and departure from HTTP/2, while gaining us little. I would be more in favor of providing an extension that just implements Pat's original proposal outside of the main setup (so no tree, nodes, placeholders, etc.). I am perfectly willing to help write such a document btw, @pmeenan ;)

It is possible that Pat's proposal (and by consequence, this PR) allows additional options that weren't possible in H2 that I haven't considered. That would be a good argument to push forward with this, but I haven't seen a concrete example of those yet.

@rmarx

rmarx (Contributor) commented May 17, 2019

Maybe it's a good idea if I prepare some slides with examples of trees in the different proposals for Thursday, to help guide the discussion?

Co-Authored-By: Mike Bishop <mbishop@evequefou.be>
@dtikhonov

dtikhonov (Contributor) commented May 17, 2019

@kazuho writes:

While I think this would be a simplification for HTTP/3, it would be an additional complexity for HTTP/2+HTTP/3 as a whole. At the moment, the prioritization scheme of HTTP/3 allows implementations to reuse the logic of HTTP/2 prioritization (only what's visible on the wire is different).

This is an important aspect of this discussion. Extant HTTP/2 stacks can use the current (ID-20) version of HTTP/3 prioritization with few changes. Radical departures from it will inevitably result in more work for them -- not just in programming effort, but also in researching how to use the new prioritization scheme effectively.

To be adopted, a new prioritization model should be substantially and demonstrably better than the current one. The proponents of new schemes should catalogue use cases (as @rmarx also mentions above) for comparison with the current scheme.

Recent reports (both in clients and servers) indicate that HTTP/2 prioritization is already being leveraged to produce substantial performance benefits. It would be nice if these solutions could be used in HTTP/3 as well.

@mikkelfj

mikkelfj (Contributor) commented May 17, 2019

Code reuse from an old spec is IMO not a good basis on which to make design decisions for a new spec, while proven concepts could be.

@LPardue

LPardue (Member) commented May 17, 2019

This is an important aspect of this discussion. Extant HTTP/2 stacks can use the current (ID-20) version of HTTP/3 prioritization with few changes. Radical departures from it will inevitably result in more work for them -- not just in programming effort, but also in researching how to use the new prioritization scheme effectively.

Which stacks do you have in mind?

A survey of user agents indicates that the priority trees were designed once and not iterated on. Developers or users have little control to modify behaviour here.

On the server side, the priority tree is an input into the logic. If a server's processing is tightly coupled to the input scheme, that would suggest difficulty in trying out alternatives, which is maybe an indicator of how successful prioritisation extensions can be.

As Cloudflare recently blogged about, our extant HTTP/2 stack has many problems in trying to deliver on the intended design expectations. However, a broad set of those problems had nothing to do with the priority tree itself.

Yes, we don't want to throw the baby out with the bathwater. However, we should also be aware that there's going to be a ton of work to determine how HTTP/3 performs in real-world usage scenarios. Predicting the effects of HTTP/3's design on that work is hard.

@dtikhonov

dtikhonov (Contributor) commented May 17, 2019

@mikkelfj writes:

Code reuse from an old spec is IMO not a good basis on which to make design decisions for a new spec, while proven concepts could be.

I agree. My point is mostly that one technology already works.

@LPardue writes:

Which stacks do you have in mind?

I mean mostly browsers. It is they that have to figure out how to use priorities effectively. The servers just have to follow what the client says.

Yes, we don't want to throw the baby out with the bathwater. However, we should also be aware that there's going to be a ton of work to determine how HTTP/3 performs in real-world usage scenarios. Predicting the effects of HTTP/3's design on that work is hard.

The logical next step is to make prioritization configurable?

ianswett added 4 commits May 17, 2019
Missed revert from a previous commit.
ianswett added 2 commits May 17, 2019
ianswett (Author, Contributor) left a comment

Thanks for all the excellent suggestions.


@LPardue

LPardue (Member) commented May 17, 2019

I mean mostly browsers. It is they that have to figure out how to use priorities effectively. The servers just have to follow what the client says.

I hypothesise the challenge there will be how to manage prioritisation of different resources on different hosts being fetched using different protocols.

@ianswett

ianswett (Author, Contributor) commented May 17, 2019

Thanks for the follow-up discussion. I think some slides for the interim would be helpful, and I'm happy to work on them with @rmarx or anyone else who's there and wants to contribute.

On the matter of re-using HTTP/2 code, I think we've already departed enough that major changes are going to be necessary for many implementations. For example, I believe Chrome would be best off moving to using a linked list of placeholders for its priority buckets, as @rmarx described.

In terms of complexity, this can be simplified somewhat and, I believe, still represent the full functionality of HTTP/2. In particular, strict priorities in placeholders or placeholders depending upon placeholders could be removed, since they're largely duplicative.

On a personal note, I'd prefer to adopt something closer to Patrick Meenan's proposal, but on that thread there was the theoretical use case of a proxy that wants to fairly share connectivity to an ORIGIN. It doesn't appear anyone's using that, so I'd prefer to jettison it, but I'm trying to find a proposal we can agree on.

One thing to appreciate is that HTTP/2 is head-of-line blocking, but HTTP/3 is not, so waiting two RTTs before pruning may result in much more frequent issues in HTTP/3. I'd recommend 3 * PTO or some much longer value. I see changing HTTP/3's priority scheme as similar to changing from HPACK to QPACK. We need to acknowledge that removing HoL blocking is going to cause issues (that's why we removed exclusive prioritization) and make adjustments. We've already made two large-ish changes, but we still don't have something with completely deterministic behavior. In other parts of the transport (and QPACK), we've balked at pathological edge cases like the one proposed in #2690, so it seems inconsistent to say it's OK for this one case.

@ianswett

ianswett (Author, Contributor) commented May 17, 2019

@dtikhonov "My point is mostly that one technology already works." - I'll specifically disagree with this point. HTTP/2 has been an RFC for 4 years now, and according to Patrick Meenan's surveys, a huge fraction of servers don't support HTTP/2 priorities and clients have somewhat odd usage patterns, with Chrome specifically trying to re-create strict priorities and Edge ignoring them entirely.

The complexity has also kept HTTP/2 priorities out of the web APIs, because it's clear the full functionality is far too complex for web developers and it's hard to agree upon what should be surfaced.

The target should be something the majority of implementations will implement, even if it's not their ideal solution.

rmarx mentioned this pull request May 18, 2019
because it is the default weight or it was not specified, then elements are
delivered sequentially and all available data from one element is sent before
any data from the next. If a given priority level has some elements with
weights and some without, the two groups share the available bandwidth equally.

@igorlord (Contributor), May 18, 2019

You can have 10 elements with weights at some priority level, sharing bandwidth. When you add a single element to that priority level without a weight, the bandwidth for the 10 elements is halved. This does not seem right.

Suggested change
weights and some without, the two groups share the available bandwidth equally.
weights and some without, the two groups share the available bandwidth proportionally to the number of elements in each group.

@mikkelfj (Contributor), May 18, 2019

I suggest looking at rmarx's proposal.

@ianswett (Author, Contributor), May 20, 2019

Good point, @igorlord. This was reflecting what Patrick Meenan wrote up, but I agree that it's not clear it's the right option.

Personally, I would prefer removing weights from streams and only having one bit to declare that this resource is incrementally useful.
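
A quick numeric illustration of @igorlord's point above (editor's arithmetic, not text from the PR): with ten weighted elements and one unweighted element at the same priority level, an equal split between the two groups halves each weighted element's share, while the suggested proportional split barely changes it.

```python
weighted, unweighted = 10, 1

# "The two groups share the available bandwidth equally": 50% per group.
per_weighted_equal = 0.5 / weighted                                          # 5.0% each
# Suggested change: split in proportion to the number of elements per group.
per_weighted_proportional = (weighted / (weighted + unweighted)) / weighted  # ~9.1% each

print(f"{per_weighted_equal:.1%} vs {per_weighted_proportional:.1%} per weighted element")
```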

@rmarx

rmarx (Contributor) commented May 20, 2019

It turns out I had misunderstood the intent of this PR with regard to @pmeenan's setup and how to recreate the following image. It's really quite brilliant!

[image: pmeenan's approach]

For others who might not have figured it out:

@ianswett proposes to use odd numbered priority values (e.g., 127, 125, ...) to represent concurrency level 3, and even numbered priority values (e.g., 126, 124, ...) to represent concurrency levels 2 and 1.
For example:
127 = concurrency 3
126, no weight = concurrency 2
126, weight = concurrency 1

This allows the whole thing to be represented without a single placeholder (which I didn't understand before now)! That is immensely powerful, while also allowing normal placeholders.

It would also allow resolving #2690 in the same way as proposed in #2723: just make priority 0, no weight the default for new nodes!

It still doesn't allow in-sequential-queue re-prioritization, but as explained in #2723 that shouldn't be a problem / can be worked around.

We're not sure yet about streams-upon-streams dependencies and how this mixes with the placeholder approach etc., but it seems this PR could be the one we were looking for. We will iterate on this and create some example schemas.
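
A literal transcription of the mapping rmarx describes (illustrative only; rmarx does not say which weight the priority-127 level carries, so 16 is an assumption):

```python
# rmarx's example mapping from pmeenan-style concurrency levels to this PR's
# (strict priority, optional weight) pairs; None means "no weight" (sequential).
CONCURRENCY_EXAMPLE = {
    3: (127, 16),    # "127 = concurrency 3" (weight assumed, so the level interleaves)
    2: (126, None),  # "126, no weight = concurrency 2"
    1: (126, 16),    # "126, weight = concurrency 1"
}
```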

@igorlord

igorlord (Contributor) commented May 20, 2019

@rmarx, I would not say that odd priority values represent concurrency level 3. You could implement @pmeenan's "concurrency level 3" that way. But the idea is that you do not need "concurrency level 3" as a special case to think about.

There are just two criteria:

  1. Priorities: Elements of the higher priority 100% preempt elements of the lower priority.

  2. Concurrency: Elements of the same priority share bandwidth. Serial elements need to be downloaded to completion one at a time, while concurrent elements need to be downloaded concurrently. There is still a question of how serial and concurrent elements share bandwidth. There is a proposal that concurrent elements get 50% of the bandwidth as a group. The alternative is that the concurrent-elements group gets a percentage proportional to the number of concurrent elements at that priority level. Both of these alternatives seem "hacky", as they possibly do not give the client sufficient control over bandwidth sharing. I have a proposal and will share it on the list (the PR comment does not seem like the right place for it).

@tatsuhiro-t

tatsuhiro-t (Contributor) commented May 21, 2019

If we give up sequential stuff, it becomes quite simple:

(quoting from the previous comment)

  1. Priorities: Elements of the higher priority 100% preempt elements of the lower priority.
  2. Concurrency: Elements of the same priority share bandwidth.

And there is no 0 weight or special weight. Nodes share bandwidth according to their weights if they are at the same priority.
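
A minimal sketch of the simplified model tatsuhiro-t describes, reusing the hypothetical `Element` type from earlier: strict-priority preemption, then plain weight-proportional sharing within the top level (every node is assumed to carry a weight in this model).

```python
def bandwidth_shares(active):
    """Simplified model: the highest priority level preempts everything below it,
    and elements within that level split bandwidth in proportion to their weights."""
    if not active:
        return {}
    top = max(e.priority for e in active)
    level = [e for e in active if e.priority == top]
    total = sum(e.weight for e in level)   # every element carries a weight here
    return {e.stream_id: e.weight / total for e in level}
```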

@ianswett

ianswett (Author, Contributor) commented May 21, 2019

This discussion is a mix of editorial (i.e., what does this scheme actually accomplish/specify?) and non-editorial (i.e., why do I think this is good/bad?) commentary.

I apologize for setting a poor example, but please do try to direct non-editorial comments to the list or issue and I'll do the same. Given @dtikhonov already started an email thread about this PR, I think the list makes the most sense.

@rmarx

rmarx (Contributor) commented May 23, 2019

Just to come back to my comment from before:

It would also allow resolving #2690 in the same way as proposed in #2723 : just make priority 0, no weight the default for new nodes!

This is not entirely correct, sadly...
The main example is Safari/Edge behaviour: all nodes dependent on the root, setting explicit weights (but no priorities!).

If un-prioritized/default nodes are added with priority 0 + weight 0, they would share the bandwidth 50/50 with the weighted nodes, making this default even worse than the HTTP/2 default of adding the nodes with weight 16...

A potential solution is to add them with priority=0, weight=1, but then they will still get some share of the bandwidth (how much depends on the weights of the other nodes) and we are again back to round-robin (which we don't want; we want FIFO in the default case).

So, other solutions:

  • Nodes without priorities and without weights (default): priority=0, weight=0. Nodes without priorities but WITH weights (explicit behaviour): priority=1,weight=incoming
  • Special priority level -1 ("implicit orphan placeholder")
  • Explicit orphan placeholder (#2690) (BUT then we can't implement this setup without any placeholders at all, which is possible at the moment)
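
To make the Safari/Edge concern concrete, a rough calculation (editor's numbers, assuming the weighted nodes and the new default node land at the same priority level and the two groups split bandwidth 50/50 as in the PR text):

```python
# Hypothetical Safari/Edge-style tree: three nodes with explicit weights, no priorities.
weights = {"css": 255, "font": 127, "img": 31}
total = sum(weights.values())

# Before any default node arrives, each weighted node gets weight/total of the bandwidth.
before = {name: w / total for name, w in weights.items()}

# Add one default node with no weight: under the 50/50 group rule it alone takes half,
# so every weighted node's share is cut in half.
after = {name: 0.5 * w / total for name, w in weights.items()}
after["default"] = 0.5

print(before, after, sep="\n")
```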
@ianswett

ianswett (Author, Contributor) commented Jun 10, 2019

I'm closing this, since the decision was not to merge it. I appreciate all the discussion and I'll continue this conversation on the list.

ianswett closed this Jun 10, 2019