New header format: proposal to reduce the amount of code points implicitly consumed #56
This way the implicit Content Type allocation is halved to [48, 63] and we get back 16 codepoints.
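For concreteness, a quick sketch of the arithmetic (the `0b0011xxxx` prefix is my assumption about how a `[48, 63]` allocation would be expressed; the exact bit pattern is whatever the proposal settles on):

```python
# Hypothetical: the unified header claims first octets matching
# 0b0011xxxx, i.e. the range [48, 63], instead of the full 0b001xxxxx
# range [32, 63]. That frees the lower half, [32, 47]: 16 codepoints.
unified = [b for b in range(256) if b >> 4 == 0b0011]
assert unified == list(range(48, 64))

freed = [b for b in range(32, 64) if b not in unified]
print(len(unified), len(freed))  # 16 codepoints consumed, 16 handed back
```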
I think that we can do better. I've just opened #58. That uses all the bits in that first octet, which would seem to be a regression, but there are two ways in which we could get another bit back.
The first option is to drop the S-bit I propose and have sequence numbers be 16 bits always. I'd be sad that we lost the small optimization, but that isn't the end of the world.
Perhaps a better option is to drop the second epoch bit. At first glance this seems like a bad idea, because you potentially have multiple epochs inbound to a server during a 0-RTT handshake. In that case, you have to distinguish between ClientHello (001 or 0011 would work for that), 0-RTT (epoch 1), Handshake (epoch 2), and the final Application Data (epoch 3). If you only have one bit of epoch, then epochs 1 and 3 are indistinguishable.
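To make the collision concrete, here is a small illustration of the argument (the epoch numbers are from the 0-RTT handshake described above; the low-bits encoding is the obvious one, not a quote from any draft):

```python
# During a 0-RTT handshake, encrypted records from these epochs can all
# be in flight toward the server at once:
#   1 = 0-RTT, 2 = Handshake, 3 = Application Data
epochs_in_flight = [1, 2, 3]

# With two epoch bits in the header, each in-flight epoch maps to a
# distinct value, so the receiver can pick the right keys.
two_bit = [e & 0b11 for e in epochs_in_flight]
assert len(set(two_bit)) == 3  # all three distinguishable

# With a single epoch bit, epochs 1 and 3 collide: a late-arriving
# 0-RTT record looks exactly like 1-RTT application data.
one_bit = [e & 0b1 for e in epochs_in_flight]
assert one_bit[0] == one_bit[2]  # both map to 1
```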
However, if you look at the way we handle 0-RTT in stacks, it might be OK to allow those late-arriving 0-RTT records to be dropped. For instance, NSS processes 0-RTT and handshake linearly. We don't read the Finished until we have the EndOfEarlyData, and as soon as we process the EOED, we stop accepting new 0-RTT. Part of the reason for this is that once we report handshake success, we don't provide any other signal to an application about the data it receives. So, for NSS, dropping 0-RTT when 1-RTT is available wouldn't change anything for us.
The final option is to not worry about this. The codepoints are there to use. The number of public bits we have is small, and we don't need to signal content type in the clear any more. As I suggested in discussing tlswg/dtls-conn-id#11, perhaps the right answer is to backport the TLS 1.3 record format to DTLS 1.2 and use only one codepoint.
With regard to 1: losing the ability to signal 8- vs. 16-bit sequence numbers would be a bit of a shame, especially in constrained networks (@hannestschofenig, opinions?), but I agree with you that it's not the end of the world, and I'd be more than happy to immolate it to get one of the reserved bits back. In fact, one bit is sufficient to build a whole header-extension machinery if we need to -- for the spin bit, VEC, etc. I value that much more than a tiny optimisation.
WRT 2: your argument in favour of dropping the second epoch bit is pretty convincing, so personally I'd be OK with a one-bit epoch.
So, I guess we could have something like:
and 16 bits sequence, always?
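A sketch of what parsing that shape could look like. To be clear, the field layout below is entirely hypothetical -- the fixed `001` prefix, a CID flag, a length flag, a one-bit epoch, and two reclaimed reserved bits -- and the actual positions would be whatever #58 settles on:

```python
# Purely illustrative first-octet layout (an assumption, not the draft):
#
#   0 0 1 C L E R R
#
# fixed prefix 001, C = CID present, L = length present,
# E = one-bit epoch, R R = bits reclaimed for future extensions.
def parse_first_octet(octet: int) -> dict:
    if octet >> 5 != 0b001:
        raise ValueError("not a unified-header record")
    return {
        "cid_present": bool(octet & 0b0001_0000),
        "length_present": bool(octet & 0b0000_1000),
        "epoch_low_bit": (octet >> 2) & 0b1,
        "reserved": octet & 0b11,
    }

# e.g. 0b0011_0100: CID present, no explicit length, epoch bit set
print(parse_first_octet(0b0011_0100))
```

Under the "16 bits sequence, always" variant, the two sequence-number octets would simply follow this octet unconditionally, with no S-bit to branch on.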
The appalling number of cycles we have been spending on devising how to do CID signalling in 1.2 is a very good argument in favour of having some kind of extension mechanism built into 1.3. Next time we realise we missed something, we won't have to struggle like Houdini. To me this is far more important than a small optimisation of undetermined benefit.