
Anti-affinity for unreliable datagrams #5

Closed

LPardue opened this issue Mar 31, 2020 · 3 comments

Comments

@LPardue
Member

LPardue commented Mar 31, 2020

Coming out of some discussion at the WebTransport BoF during IETF 107, @enygren opened an issue against the WebTransport API, w3c/webtransport#109 (comment):

There should be a way to specify that unreliable datagrams do not end up in the same packet, at least when the underlying QUIC or HTTP/3 interface is used. For HTTP/2, an equivalent behavior may be a way to indicate which packets get dropped or thinned when this is needed (e.g., to disprefer dropping adjacent packets).

While I don't think the DATAGRAM draft itself should concern itself too much with the API, I do wonder whether some guidance or considerations could be captured about coalescing DATAGRAM frames into packets.
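For context on what such guidance might govern, here is a minimal sketch of a sender-side packer with an anti-affinity knob. Everything here (the `Packet` struct, `pack_datagrams`, the `anti_affinity` flag) is hypothetical, not from the draft or any real QUIC stack, and frame-header overhead is ignored for brevity.

```rust
use std::collections::VecDeque;

// Hypothetical packet under construction; `capacity` stands in for the
// space left after QUIC headers (frame overhead ignored for brevity).
struct Packet {
    payload: Vec<u8>,
    capacity: usize,
}

/// Pack queued datagrams into packets. With `anti_affinity` set, each
/// packet carries at most one DATAGRAM frame, so a single packet loss
/// destroys at most one datagram (at the cost of packing efficiency).
fn pack_datagrams(
    queue: &mut VecDeque<Vec<u8>>,
    mtu: usize,
    anti_affinity: bool,
) -> Vec<Packet> {
    let mut packets = Vec::new();
    let mut current = Packet { payload: Vec::new(), capacity: mtu };

    while let Some(dgram) = queue.front() {
        let fits = dgram.len() <= current.capacity - current.payload.len();
        let allowed = !anti_affinity || current.payload.is_empty();

        if fits && allowed {
            let dgram = queue.pop_front().unwrap();
            current.payload.extend_from_slice(&dgram);
        } else if current.payload.is_empty() {
            // Larger than a whole packet: datagrams cannot be
            // fragmented, so this one can never be sent.
            queue.pop_front();
        } else {
            // Flush the current packet and start a new one.
            packets.push(current);
            current = Packet { payload: Vec::new(), capacity: mtu };
        }
    }

    if !current.payload.is_empty() {
        packets.push(current);
    }

    packets
}
```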

@DavidSchinazi
Contributor

I'm not sure what the value-add is here. Losses are often bursty, so having two DATAGRAM frames in the same QUIC packet, or in two separate QUIC packets sent back-to-back, is likely to produce the same result.
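To make that intuition concrete, here is a toy simulation under an assumed two-state Gilbert loss model (all parameters invented for illustration). It compares the probability that both datagrams are lost when they share one packet versus when they ride in two back-to-back packets; as bursts lengthen, the two probabilities converge.

```rust
fn main() {
    // Assumed loss-model parameters (made up, not measurements).
    let p_good_to_bad = 0.01_f64; // chance a loss burst starts
    let p_bad_to_good = 0.30_f64; // chance a burst ends
    let p_bad = p_good_to_bad / (p_good_to_bad + p_bad_to_good); // stationary loss rate
    let trials = 1_000_000u32;

    // Tiny xorshift64* PRNG so the sketch needs no external crates.
    let mut state: u64 = 0x9E37_79B9_7F4A_7C15;
    let mut rand = move || -> f64 {
        state ^= state >> 12;
        state ^= state << 25;
        state ^= state >> 27;
        (state.wrapping_mul(0x2545_F491_4F6C_DD1D) >> 33) as f64 / (1u64 << 31) as f64
    };

    let mut lost_coalesced = 0u32;    // both datagrams in one packet
    let mut lost_back_to_back = 0u32; // one datagram in each of two packets

    for _ in 0..trials {
        // Channel state for the first packet slot, drawn from the
        // stationary distribution.
        let first_bad = rand() < p_bad;

        // (a) Coalesced: both datagrams live or die with the one packet.
        if first_bad {
            lost_coalesced += 1;
        }

        // (b) Back-to-back: step the chain once for the second packet;
        // count trials where *both* datagrams are lost.
        let second_bad = if first_bad {
            rand() >= p_bad_to_good
        } else {
            rand() < p_good_to_bad
        };
        if first_bad && second_bad {
            lost_back_to_back += 1;
        }
    }

    // As bursts get longer (p_bad_to_good shrinks), the two rates
    // converge, which is the intuition behind the comment above.
    println!(
        "both datagrams lost: coalesced {:.3}%, back-to-back {:.3}%",
        100.0 * lost_coalesced as f64 / trials as f64,
        100.0 * lost_back_to_back as f64 / trials as f64,
    );
}
```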

@nibanks
Member

nibanks commented May 7, 2020

I tend to agree with @DavidSchinazi here. Assuming tail loss, which usually happens because a node on the path doesn't have enough buffer for an incoming packet it must forward along, having all the data in one contiguous packet or in two separate back-to-back packets would likely result in the same loss pattern.

Either way, this seems like an implementation design decision (do I expose a knob for this or not?) and not really a spec decision.

@LPardue
Member Author

LPardue commented May 7, 2020

The problem I saw was that, since losses happen and can be bursty, and since DATAGRAM frames are unrecoverable once lost, the tradeoffs in deciding how to pack DATAGRAM frames differ slightly from those for STREAM frames.

When the frames-per-packet count is roughly equal to the number of packets lost, I agree the difference is negligible. But as the frames-per-packet count grows, each loss destroys more datagrams, so I think there is a problem. That said, I've mostly convinced myself that the DATAGRAM draft cannot offer useful specific advice. Application protocols might be able to give specific guidance, and nothing stops them from doing so in a different spec; implementations will just do whatever they deem most optimal.
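A back-of-the-envelope illustration of the frames-per-packet point, with assumed numbers purely for intuition. It only counts how many datagrams a fixed-length burst wipes out under each packing strategy; it ignores that the two strategies emit different packet counts overall.

```rust
// A burst that drops `burst_len` packets destroys
// `frames_per_packet * burst_len` datagrams when frames are coalesced,
// but only `burst_len` when each packet carries a single DATAGRAM frame.
fn main() {
    let burst_len = 2;
    for frames_per_packet in [1, 2, 10] {
        let coalesced = frames_per_packet * burst_len;
        println!(
            "{frames_per_packet} frames/packet, {burst_len}-packet burst: \
             {coalesced} datagrams lost coalesced vs {burst_len} spread out"
        );
    }
}
```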

Closing the issue, cheers.
