Prioritization schemes need flexibility #110

Closed
suhasHere opened this issue Mar 30, 2023 · 17 comments

@suhasHere
Collaborator

suhasHere commented Mar 30, 2023

The current spec requires that, when tracks are part of a bundle, there would be some sort of round-robin ordering between them and send-order-based ordering inside the bundle.

This brings up three things:

  • Idea of pooling since bundle == WT
  • Round robin as the natural ordering - maybe
  • Send order as the forward preference

There are alternative possible proposals that would allow object priorities that are not tied to a bundle or the WT session but instead operate across all tracks within a QUIC connection.
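
For illustration, a rough sketch of how I read the current scheme (the Obj/pick_next names are just for discussion, not from the draft):

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Obj:
    track: str
    send_order: int

def pick_next(queues: dict[str, list[Obj]], rr: deque) -> Obj | None:
    """Round-robin across the tracks of a bundle; within a track, lowest send_order first."""
    for _ in range(len(rr)):
        track = rr[0]
        rr.rotate(-1)          # round robin: this track goes to the back
        pending = queues.get(track, [])
        if pending:
            pending.sort(key=lambda o: o.send_order)
            return pending.pop(0)
    return None

# Two tracks in one bundle
queues = {
    "audio": [Obj("audio", 2), Obj("audio", 1)],
    "video": [Obj("video", 5)],
}
rr = deque(queues)
while (nxt := pick_next(queues, rr)) is not None:
    print(nxt.track, nxt.send_order)   # audio 1, video 5, audio 2
```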

@afrind
Collaborator

afrind commented Mar 30, 2023

As an individual:

Idea of pooling since bundle == WT

This is true in the draft now, and I agree we should be using an abstraction that is not transport specific in the data model. In my mind, a bundle is fundamentally a unit of pooling. They are for cases where you want to pool two sets of tracks and want the scheduler to apply a higher level of prioritization between them.

Round robin as the natural ordering - maybe

If you caught the WebTransport issue 102 discussion yesterday, we talked about a few different ways one could prioritize between pooled flows (obviously the context there is WT, but it generalizes to bundles). Round robin is one scheme, but I agree there are others (e.g. weighted RR or even strict ordering).

Send order as the forward preference

I also think we want flexibility in prioritization signaling at this stage. I wonder if we could take a page out of the HTTP book and replace sendOrder(i) with something that is both interoperable and has room for experimentation and extension.
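
For reference, HTTP's Extensible Priorities (RFC 9218) carry a small structured signal - an urgency level 0-7 plus an incremental flag, with room for extra parameters. A rough sketch of what an analogous signal could look like (the PrioritySignal name and fields are hypothetical, not a proposal):

```python
from dataclasses import dataclass, field

@dataclass
class PrioritySignal:
    """Sketch of a structured priority signal, loosely modeled on RFC 9218."""
    urgency: int = 3           # 0 (highest) .. 7 (lowest), default 3 as in HTTP
    incremental: bool = False  # share bandwidth among same-urgency flows vs. serialize them
    extensions: dict = field(default_factory=dict)  # room for experimentation

    def serialize(self) -> str:
        # Structured-Fields-like text form, e.g. "u=1, i"
        parts = [f"u={self.urgency}"]
        if self.incremental:
            parts.append("i")
        parts += [f"{k}={v}" for k, v in self.extensions.items()]
        return ", ".join(parts)

print(PrioritySignal(urgency=1, incremental=True).serialize())  # "u=1, i"
```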

object priorities that are not tied to a bundle or the WT session but instead operate across all tracks within a QUIC connection

Same observation as in #109 -- if you are scoping your priorities to the entire QUIC connection, that's the same to me as "this QUIC connection has a single bundle". Do you have another use for bundles besides this?

@suhasHere
Collaborator Author

suhasHere commented Mar 30, 2023

This is true in the draft now, and I agree we should be using an abstraction that is not transport specific in the data model. In my mind, a bundle is fundamentally a unit of pooling. They are for cases where you want to pool two sets of tracks and want the scheduler to apply a higher level of prioritization between them.

Pooling as a construct for a prioritization domain with send order is one way to set forwarding decisions, but there are other ways as well, like the object priorities proposal Christian presented.

For my understanding: if an entire QUIC connection is a single bundle, does it mean that a relay node can take tracks from multiple bundles and put them in a single egress QUIC connection? Also, if one is allowed to do so, I am not too sure of the utility of the bundle at that point.
IIUC, for the relays, the bundle needs to be optional, and in many cases the ingress and egress flows at relays vary depending on various factors that are outside the control of the bundle.

@suhasHere
Collaborator Author

I also think we want flexibility in prioritization signaling at this stage. I wonder if we could take a page out of the HTTP book and replace sendOrder(i) with something that is both interoperable and has room for experimentation and extension.

I suggested this to Luke, but it might be worth exploring to see if it makes sense.
If the Forward_Preference field (yes, a new name just for discussion's sake :-)) of the object header is encoded as a varint, then I think the same field can be used both for strict priority values (0-7) and for some form of increasing send_order. Might need more thinking, but just a strawman here.
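
To make the strawman concrete, one purely hypothetical reading of a single varint field (the 0-7 split and the names are placeholders for discussion only):

```python
def interpret_forward_preference(value: int) -> tuple[str, int]:
    """Hypothetical reading of a single varint Forward_Preference field:
    0-7  -> strict priority level
    >= 8 -> increasing send order"""
    if value < 0:
        raise ValueError("varints are non-negative")
    if value <= 7:
        return ("strict_priority", value)
    return ("send_order", value)

print(interpret_forward_preference(3))     # ('strict_priority', 3)
print(interpret_forward_preference(1000))  # ('send_order', 1000)
```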

@afrind
Collaborator

afrind commented Mar 30, 2023

As an individual:

Pooling as a construct for a prioritization domain with send order is one way to set forwarding decisions, but there are other ways as well, like the object priorities proposal Christian presented.

I think this comes down to what abstraction layers applications will need to simply express the priorities of multiple coexistent flows. Probably these ideas are isomorphic, or close to isomorphic, with each other, so relays could send the packets in the ~right order regardless of the scheme chosen. The relay just needs to know what data to send next, or drop, etc. However, HTTP/2 showed that the abstractions and APIs matter a lot, and if senders of media can't easily express the priority they need, they may not use the prioritization system to its full advantage.

For my understanding: if an entire QUIC connection is a single bundle, does it mean that a relay node can take tracks from multiple bundles and put them in a single egress QUIC connection?

In my thinking, a QUIC connection is one or more bundles. Yes, a relay can take tracks from multiple bundles and egress them in a single QUIC connection. The scenarios draft example works: two conference participants, each with 1 QUIC connection sending 1 bundle to a relay, with a mixer selecting and merging them together into an egress connection to another participant.

Also, if one is allowed to do so, I am not too sure of the utility of the bundle at that point.

I think my best example where I need something like bundle for prioritization is on a QUIC connection between two relays carrying media streams from different conferences: Yes within each conference there's an understanding of how to prioritize tracks and objects, but at a high level there is no prioritization between them - every conference gets an equal share of the bandwidth.
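
A rough sketch of that two-level idea, assuming (purely for illustration) round robin across bundles for the equal share and a priority order within each bundle:

```python
import heapq
from collections import deque

def schedule(bundles: dict[str, list[tuple[int, str]]]):
    """Round robin across bundles (equal share); within a bundle, lowest (priority, name) first."""
    heaps = {name: objs[:] for name, objs in bundles.items()}
    for h in heaps.values():
        heapq.heapify(h)
    rr = deque(heaps)
    while any(heaps.values()):
        bundle = rr[0]
        rr.rotate(-1)
        if heaps[bundle]:
            yield bundle, heapq.heappop(heaps[bundle])

# Two conferences pooled on one relay-to-relay connection
bundles = {
    "conf-A": [(0, "audio-A"), (1, "video-A")],
    "conf-B": [(0, "audio-B"), (2, "video-B")],
}
for name, obj in schedule(bundles):
    print(name, obj)   # alternates conf-A / conf-B, highest priority within each first
```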

If the Forward_Preference field (yes, a new name just for discussion's sake :-)) of the object header is encoded as a varint, then I think the same field can be used both for strict priority values (0-7) and for some form of increasing send_order. Might need more thinking, but just a strawman here.

No naming bikesheds :D I think something with more structure than a single integer would be more interoperable, but I don't have a preference on the shape right now.

@suhasHere
Collaborator Author

suhasHere commented Mar 30, 2023

I think my best example where I need something like bundle for prioritization is on a QUIC connection between two relays carrying media streams from different conferences: Yes within each conference there's an understanding of how to prioritize tracks and objects, but at a high level there is no prioritization between them - every conference gets an equal share of the bandwidth.

That is one possible way a relay might want to do such allocation. It's more nuanced than that. I feel we are forcing ourselves into a corner by expecting bundles to be carried end to end. It's the relay's choice.

Since we are using WT sessions and they are mapped to bundles, we are now extending a requirement for every hop to carry the WT sessions from ingress to egress (1:1) to retain the bundle/prioritization domain. This further leads to the pooling requirement.

If I look at it from a QUIC perspective, I am dealing with streams and priorities. There are ways to address prioritization across all the tracks within a QUIC connection without requiring the bundle, as presented yesterday. We have implemented it and it does work as well.

I am not opposing WT usage and I support it. But the fact that we are enforcing the prioritization domain to be bound to it, and the list of further downstream requirements off it at every hop, is very restricting.

Maybe a compromise is that we carry a bool flag that tells every hop whether or not to bundle things together, and we describe the implications. As you pointed out, relays look at an object header field called "see this to decide what to do next", and other than that the rest is all application-level semantics, which may or may not be needed.
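
As a strawman, the per-hop hint could be as small as a boolean next to that field (the names below are illustrative only):

```python
from dataclasses import dataclass

@dataclass
class HopHint:
    """Illustrative only, not a proposed wire format."""
    keep_bundled: bool        # should the next hop keep these tracks pooled together?
    forward_preference: int   # the "see this to decide what to send next" value

print(HopHint(keep_bundled=False, forward_preference=42))
```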

@afrind
Collaborator

afrind commented Mar 30, 2023

As an individual:

That is one possible way a relay might want to do such allocation. It's more nuanced than that. I feel we are forcing ourselves into a corner by expecting bundles to be carried end to end. It's the relay's choice.

I don't think bundles are end-to-end - maybe that's the sticking point? I'm making a comment in #109. I'm not sure who else might agree.

WT

I never said WT you said WT :D I'm trying to reframe the language to be transport independent as much as possible.

This further leads to the pooling requirement.

Supporting pooling is a requirement for Meta's use cases at least (I've heard Will express the same for Akamai), so we can't ignore those. I agree we shouldn't require implementations to support > 1 bundle on a connection if they don't want to.

@fluffy
Contributor

fluffy commented Mar 30, 2023

The way I heard Meta's use case expressed before could be solved with things other than pooling, so I'm not sure I know what that use case is. I think it would be very good if that use case could be explicitly described in a few sentences here.

@afrind
Collaborator

afrind commented Mar 30, 2023

As an individual:

I think it would be very good if that use case could be explicitly described in a few sentences here.

This is one:

I think my best example where I need something like bundle for prioritization is on a QUIC connection between two relays carrying media streams from different conferences: Yes within each conference there's an understanding of how to prioritize tracks and objects, but at a high level there is no prioritization between them - every conference gets an equal share of the bandwidth.

The other was mentioned on the list (playing and prefetching multiple unrelated videos in a feed simultaneously, sharing bandwidth with different high-level priorities).

The way I heard Meta's use case expressed before could be solved with things other than pooling

Pooling is not the only way to solve this problem I suppose. But removing a way to prioritize an entire set of tracks together at a high level will (I think) dramatically complicate the problem for senders to express the right priority.

@suhasHere
Collaborator Author

I never said WT you said WT :D I'm trying to reframe the language to be transport independent as much as possible.

I understand; the reason I linked it is that the requirements come from having to have separate WT sessions pooled over one QUIC connection. If one uses object priorities, we can still get priority schemes that are fair (incremental=false and incremental=true).

Pooling is not the only way to solve this problem I suppose. But removing a way to prioritize an entire set of tracks together at a high level will (I think) dramatically complicate the problem for senders to express the right priority.

Pooling allows each track to have a certain forwarding preference. You can reach the same end goal without pooling too. Round robin is not always the best way to go between these pools either. As I referred to, it is more nuanced than this.

@fluffy
Contributor

fluffy commented Mar 31, 2023

So there is a key thing I want to understand about what people think the requirement for fairness is. Imagine the following case:

A, B, and C are sending video to a single relay R. X and Y are receiving the video from R.

A is sending a track to X, B is sending a track to X, and C is sending a track to Y. The only congested link is a shared link coming out of R to both X and Y.

So do X and Y get equal amounts of bandwidth, or does X get twice as much bandwidth as Y because it has tracks from two senders (A and B) while Y has only data from one sender (C)?

We have to understand what we are trying to accomplish here or we are just going to go around in circles.
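
To make the two readings of "fair" concrete, here is the arithmetic for the example above, assuming the congested link out of R has capacity 1.0:

```python
def per_receiver_split(capacity, tracks_per_receiver):
    """Equal share per receiver, regardless of how many senders feed it."""
    share = capacity / len(tracks_per_receiver)
    return {r: share for r in tracks_per_receiver}

def per_track_split(capacity, tracks_per_receiver):
    """Equal share per track, so a receiver with more senders gets more."""
    total = sum(tracks_per_receiver.values())
    return {r: capacity * n / total for r, n in tracks_per_receiver.items()}

tracks = {"X": 2, "Y": 1}               # X gets tracks from A and B, Y from C
print(per_receiver_split(1.0, tracks))  # {'X': 0.5, 'Y': 0.5}
print(per_track_split(1.0, tracks))     # {'X': 0.666..., 'Y': 0.333...}
```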

@afrind
Collaborator

afrind commented Mar 31, 2023

As a priority enthusiast:

Do X and Y share a QUIC connection? As I read your example, it doesn't sound like it. I don't think this wg is working on prioritizing flows across different QUIC connections on the same link, but rather on prioritizing data within a single QUIC connection. So in that case, the congestion controllers of the QUIC connections decide how bandwidth is allocated to each.

Pooling comes into play if you extend the above example to have two relays R1 and R2, connected by a single QUIC connection carrying video for A, B and C together.

@fluffy
Contributor

fluffy commented Mar 31, 2023

Yes, in my example I mean that X and Y are separate clients, not on the same computer, much less the same QUIC connection. But they share the congested link coming out of R.

I agree with Alan's analysis of what will happen here, but I was more trying to sort out what we are trying to have happen, as it seemed like Luke wanted the prioritization to be such that bandwidth is allocated as a fraction of the number of senders, not receivers.

@suhasHere suhasHere self-assigned this Mar 31, 2023
@kixelated
Collaborator

We should loosen up the text for now to enable experimentation. In fact maybe we put the prioritization scheme/mode on the wire (for a collection of tracks).

But I strongly believe we'll need a specific prioritization scheme between tracks at some point. It cannot be optional and up to the relays.
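
A rough sketch of what putting the scheme on the wire could mean, with hypothetical scheme identifiers (not from any draft):

```python
from dataclasses import dataclass
from enum import Enum

class Scheme(Enum):
    """Hypothetical scheme identifiers, not from any draft."""
    ROUND_ROBIN = 0
    SEND_ORDER = 1
    STRICT_PRIORITY = 2

@dataclass
class TrackCollectionParams:
    scheme: Scheme = Scheme.SEND_ORDER   # declared once for a collection of tracks

print(TrackCollectionParams(scheme=Scheme.ROUND_ROBIN).scheme.name)   # ROUND_ROBIN
```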

@acbegen

acbegen commented Apr 7, 2023

So do X and Y get equal amounts of bandwidth, or does X get twice as much bandwidth as Y because it has tracks from two senders (A and B) while Y has only data from one sender (C)?

We have to understand what we are trying to accomplish here or we are just going to go around in circles.

This is something we refer to as "utility fairness." If you say "fairness" only, people assume fairness in terms of bitrate/bandwidth, which is far from ideal in many cases. The problem here is that maybe Y will fan out downstream and the total number of receivers for C will be way more than two. Unless you have explicit signaling, there is no way for R to know this and, hence, make the right decision. @afrind's scenario (R1 and R2) is practical but is certainly not the only scenario we will ever face. And bundling as it stands now does not seem to help with most other scenarios.

@suhasHere
Collaborator Author

Being discussed as part of #139

@afrind
Collaborator

afrind commented Apr 21, 2023

@suhasHere - should we merge/close this issue then?

@afrind
Collaborator

afrind commented May 5, 2023

I'm going to mark this closed based on #139.

@afrind afrind closed this as completed May 5, 2023