Anti-affinity for unreliable datagrams #109
I wonder if this is also a consideration for the DATAGRAM draft itself.
The API doesn't currently support FEC, pending definition of support within IETF QUIC. But this could probably be supported in the future via an (optional) argument within the QuicTransport constructor. For unidirectional streams we did provide for future extensibility via an optional argument.
I'm not sure what the value-add is here. Losses are often bursty, so having two DATAGRAM frames in the same QUIC packet or in two separate QUIC packets sent back-to-back is likely to produce the same result.
I don't disagree that burst losses could hamper the intended goals mentioned. But I do find it interesting that draft-vvv-webtransport-overview has a recommendation that I'd forgotten about:

> not apply aggregation algorithms to (unreliable) datagrams
I guess the purpose is to let the application reason about how its unreliable datagrams are aggregated: if the aggregation isn't known, the application can do useless and unhelpful things. Consider sending a player's update information at 60 Hz unreliably. (Why not? It's tiny - position, velocity, maybe a few other numbers, a timestamp - and more Hz means a better experience - and if some updates drop, so be it.) If the transport/browser aggregates datagrams at 15 Hz, three quarters of the data is always wasted: only the last datagram matters. Either the browser could provide specific information about how datagrams will be aggregated, or it could indeed 'not apply aggregation algorithms to (unreliable) datagrams' - so that when the application chooses a send rate, that's the rate at which individual packets are actually transmitted, and after a series of drops the data stream comes back online as soon as possible.
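To make the 60 Hz example concrete, here is a sketch of packing one per-tick player update into a datagram payload. The field layout and the `encodeUpdate` name are hypothetical - the transport only sees opaque bytes - and the send loop is shown as a comment because it needs a live connection:

```js
// Pack a tiny per-tick update (timestamp + position + velocity) into 24 bytes.
// Layout is illustrative, not part of any spec.
function encodeUpdate(timestampMs, px, py, vx, vy) {
  const buf = new ArrayBuffer(24);
  const view = new DataView(buf);
  view.setFloat64(0, timestampMs);
  view.setFloat32(8, px);
  view.setFloat32(12, py);
  view.setFloat32(16, vx);
  view.setFloat32(20, vy);
  return new Uint8Array(buf);
}

// In a page, the 60 Hz loop would look roughly like:
//   const wt = new WebTransport(url);
//   await wt.ready;
//   const writer = wt.datagrams.writable.getWriter();
//   setInterval(() => {
//     writer.write(encodeUpdate(performance.now(), x, y, vx, vy));
//   }, 1000 / 60);
```

If each `write()` becomes exactly one datagram, the application's chosen tick rate is the rate actually seen on the wire; any aggregation below this layer would silently discard most of the ticks.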
Can someone clarify the status quo of the datagram draft and current implementations? The cited section sounds non-mandatory. Perhaps it can be made mandatory, or this API can add a control mechanism.
@vasilvv - can you address the question above: "Can someone clarify the status quo of the datagram draft and current implementations?"
QUIC DATAGRAM is an adopted draft in the QUIC WG. It has a small number of open issues, which are due to be resolved. The expectation is that the draft will be ready for a WG last call on the order of months. There are multiple interoperable implementations. There is also an HTTP/3 DATAGRAM draft in the MASQUE WG, which is a dependency for WebTransport now that the protocol is HTTP/3 based. The timeline for this is further out. There are multiple interoperable implementations.
Based on the latest spec, a sender never aggregates datagrams. In other words:

```js
const writer = wt.datagrams.writable.getWriter();
writer.write(new Uint8Array([1]));
writer.write(new Uint8Array([2]));
```

...will send two datagrams, whereas:

```js
writer.write(new Uint8Array([1, 2]));
```

...will send one. We also don't aggregate in receiveDatagrams, so I think we can close this.
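The same boundary-preservation holds on the receive side: each chunk read from the datagram stream is one datagram, never a concatenation. A minimal sketch, using a plain `ReadableStream` as a stand-in for `wt.datagrams.readable` (which yields one `Uint8Array` per received datagram):

```js
// Drain a readable stream of datagrams into an array, one entry per datagram.
async function readAll(readable) {
  const out = [];
  const reader = readable.getReader();
  for (;;) {
    const { value, done } = await reader.read();
    if (done) return out;
    out.push(value);
  }
}

// In a page this would be: const received = await readAll(wt.datagrams.readable);
```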
There should be a way to specify that unreliable datagrams do not end up in the same packet, at least when the underlying QUIC or HTTP/3 interface is used. For HTTP/2, an equivalent behavior may be a way to indicate which packets get dropped or thinned when this is needed (e.g., to disprefer dropping adjacent packets).
One use-case for this would be to allow FEC mechanisms to be implemented on top of WebTransport. While correlated loss is going to happen no matter what we do, at least being able to prevent datagrams from ending up in the same underlying packet will help.
The interface could range from a stream attribute (simplest), to some way to color/label datagrams (more complex), to some way to relate datagrams (more complex still, and probably too complex).
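To illustrate why packet anti-affinity matters for the FEC use-case, here is a minimal XOR-parity scheme an application could layer over datagrams. The function names are illustrative and nothing here is part of any proposed API; it assumes all datagrams in a group have the same length. The scheme recovers any single lost datagram in a group, but only if the parity datagram doesn't share a packet (and hence a loss event) with one of the data datagrams:

```js
// XOR all datagrams in a group together to form one parity datagram.
// Assumes every datagram in the group has the same length.
function xorParity(datagrams) {
  const parity = new Uint8Array(datagrams[0].length);
  for (const d of datagrams) {
    for (let i = 0; i < d.length; i++) parity[i] ^= d[i];
  }
  return parity;
}

// If exactly one datagram in the group is lost, XOR-ing the survivors
// with the parity reproduces the lost datagram.
function recover(survivors, parity) {
  return xorParity([...survivors, parity]);
}
```

If the transport packs the parity datagram into the same QUIC packet as a data datagram, one packet loss takes out two group members and the scheme fails, which is exactly what an anti-affinity hint would prevent.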