Robert Wilton's QPACK Comment 3 #4802
Section 3.2.2 mentions that it is an error to attempt to insert an entry larger than the table size:
I can't actually find a place where the document says that if you fail to decode an instruction or reference (for any reason, which would include exceeding implementation limits), it's an error. I think we might need one.
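For concreteness, the Section 3.2.2 check being discussed can be sketched roughly as follows. This is a minimal sketch with hypothetical helper names; entry size follows the spec's "name length + value length + 32 octets" accounting, and the sketch deliberately ignores the blocked-reference bookkeeping a real decoder also needs before evicting:

```python
ENTRY_OVERHEAD = 32  # per-entry overhead in octets (Section 3.2.1)

def insert_entry(table, capacity, name, value):
    """table: list of (name, value) byte-string pairs, oldest first.

    Hypothetical helper illustrating the Section 3.2.2 rule; real
    implementations must also refuse to evict entries that are still
    referenced by unacknowledged field sections.
    """
    entry_size = len(name) + len(value) + ENTRY_OVERHEAD
    if entry_size > capacity:
        # Section 3.2.2: attempting to add an entry larger than the
        # dynamic table capacity is an error
        raise ValueError("entry larger than dynamic table capacity")
    used = sum(len(n) + len(v) + ENTRY_OVERHEAD for n, v in table)
    # evict oldest entries until the new entry fits
    while used + entry_size > capacity:
        n, v = table.pop(0)
        used -= len(n) + len(v) + ENTRY_OVERHEAD
    table.append((name, value))
```

Note that the too-large case is a hard error rather than an eviction loop: even an empty table could never hold the entry.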
Actually, section 6 says:
So "failing to interpret" anything results in an error. I think this includes being unable to decode an integer or string because of implementation limits. Is this sufficient, or should we add something a bit more explicit?
If section 3.2.2 already imposes a limit on the length of an entry, then it would be good to reference that text from section 7.4. E.g. section 7.4 states: But I suspect that the only limit really needed here is the one in 3.2.2? I.e. it seems reasonable that a receiver should accept any string up to the maximum table size? My other point is that the text suggests a receiver can arbitrarily choose to set a limit on the size of integers that it receives, whereas section 4.1.1 states: So I guess what I'm really suggesting is that this paragraph (in section 7.4):
is reworded to say something along the lines of:
It wasn't clear to me whether integers bigger than 62 bits are ever likely in practice; I presume not.
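For context, here is a hedged sketch of decoding the prefixed integers under discussion (the HPACK-style encoding that QPACK reuses), with an explicit cap at 2^62 - 1, the minimum range Section 4.1.1 requires decoders to handle. The function name is hypothetical:

```python
def decode_prefix_int(data, prefix_bits):
    """Decode a prefixed integer; return (value, bytes_consumed).

    Raises ValueError if the value exceeds 2^62 - 1 (the bound a QPACK
    decoder must support per Section 4.1.1) or the input is truncated.
    """
    max_value = (1 << 62) - 1
    mask = (1 << prefix_bits) - 1
    value = data[0] & mask
    if value < mask:
        return value, 1          # value fit entirely in the prefix
    shift = 0
    i = 1
    while True:
        if i >= len(data):
            raise ValueError("truncated integer")
        b = data[i]
        value += (b & 0x7F) << shift   # 7 payload bits per extra byte
        if value > max_value:
            raise ValueError("integer exceeds 2^62 - 1")
        shift += 7
        i += 1
        if not (b & 0x80):             # high bit clear: last byte
            return value, i
```

The bound makes the "what if it overflows?" question moot: a conforming decoder simply treats anything larger as a decoding failure, which Section 6 already classifies as an error.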
A decoder that actually counts insertions (instead of using modulo arithmetic) will likely fail when the integer value it uses overflows. Of course, one does not have to use …
The text is really attempting to say that an implementation MUST NOT continue to process to infinity; bounds must exist for basic sanity. What those bounds will be is implementation-dependent. Saying that they're limited to the table size is only true on the encoder stream; request streams can have string literals greater than the table size (consider the degenerate case of zero). HTTP limits could be pushed down to QPACK such that the library will refuse to decode a larger string than the implementation would accept; I don't think that's necessarily required though.
I don't think that using arbitrarily sized integer libraries is the answer here, due to the performance overhead of such libraries. My concerns are twofold: (1) It is unclear what arbitrary limit receivers should apply. E.g., is 255 bytes enough for a string? Is a 64-bit signed integer enough for an integer? (2) Senders probably won't know that the reason the stream has failed is that they are hitting some arbitrary limit in the receiver, and even if they did, I'm not sure there is anything they can do about it. Hence, another suggestion is to specify minimum expected bounds for these values. Or, if these bounds are effectively provided by the HTTP spec, perhaps the appropriate section in that spec could be referenced.
Editors, could we come to a conclusion here?
There is already a requirement that implementations be able to decode 62-bit integers. Anything larger is really unnecessary -- we intentionally encode the "Required Insert Count" on the wire in a way that bounds its size to be much smaller in practice. The requirement stems from the need to decode QUIC stream IDs. I think @dtikhonov's comment about bignums is about what an implementation might do if it overflowed its internal representation decoding the Required Insert Count. This is not a practical concern, and can be completely worked around with a "clever" implementation that is worried about more than 2^64 table insertions. HTTP also provides a setting to inform the peer of a maximum size of a field section (MAX_FIELD_SECTION_SIZE -- see HTTP/3 section 4.1.1.3), which is also a practical upper bound for any single QPACK string for HTTP/3. Perhaps the QPACK doc could provide this hint? I'll draft a PR to this effect.
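The modulo-based wire encoding mentioned above is what keeps the decoded value small: the encoder sends the Required Insert Count reduced modulo twice the table's maximum entry count, and the decoder reconstructs the full value from its own running insert total. A rough Python transcription of the spec's reconstruction algorithm (names are illustrative):

```python
def decode_required_insert_count(encoded, max_entries, total_inserts):
    """Reconstruct the Required Insert Count.

    encoded:       wire value (0, or (RIC mod 2*max_entries) + 1)
    max_entries:   maximum number of dynamic table entries
    total_inserts: the decoder's count of insertions so far
    """
    if encoded == 0:
        return 0
    full_range = 2 * max_entries
    if encoded > full_range:
        raise ValueError("encoded value out of range")
    max_value = total_inserts + max_entries
    max_wrapped = (max_value // full_range) * full_range
    req = max_wrapped + encoded - 1
    if req > max_value:
        if req <= full_range:
            raise ValueError("value impossible given insert count")
        req -= full_range   # value wrapped; unwind one full range
    if req == 0:
        raise ValueError("zero must be encoded as 0")
    return req
```

Because the wire value never exceeds 2 * MaxEntries, the only quantity that grows without bound is the decoder's own insert counter, which is where the 64-bit-is-plenty argument below comes in.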
FWIW, exceeding 2^60 insertions would be a tall order given data limits in QUIC; a 64-bit integer is definitely enough space for any size or length field. Implementations can use smaller values, probably reliably. There is some need for care in those cases, but I don't see the spec as bearing any responsibility for that. @afrind, remember that MAX_FIELD_SECTION_SIZE is advisory, so I wouldn't go so far as to say "practical".
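The "tall order" claim is easy to sanity-check: every encoder-stream instruction occupies at least one octet, so the number of insertions is bounded by the bytes QUIC can ever deliver on a single stream. Assuming QUIC's variable-length-integer cap of 2^62 - 1 on stream offsets (RFC 9000, Section 16):

```python
# Assumption: each encoder-stream instruction is at least one octet long.
MAX_STREAM_BYTES = 2**62 - 1             # QUIC varint cap on stream offsets
max_possible_inserts = MAX_STREAM_BYTES  # at most one insertion per octet

# A 64-bit insertion counter therefore cannot overflow from real traffic;
# even reaching 2^60 insertions means moving at least an exbibyte
# (2^60 octets) over the encoder stream alone.
assert max_possible_inserts < 2**64
```

This is back-of-the-envelope arithmetic, not spec text, but it shows why a plain 64-bit counter is safe without any bignum machinery.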
So, do we have a proposal on this one?
Every (sane) implementation has a limit; what's advisory is informing your peer about your limit before they hit it, or enforcing downstream limits to short-circuit errors. Any implementation could absolutely tell the decoder not to bother decoding anything larger than what they will process and just return an error.
I'm trying to address Robert's second point:
The mechanism to address that in HTTP/3 is the advisory setting, though it's slightly different in that the setting applies to the entire field section, and it's conceivable an implementation could have another, smaller limit for any individual field name or value. It's easier to pretend that isn't the case. For these reasons, I stumbled a bit on the exact wording of the text last week and ran out of time because of other commitments. I'm thinking of something along the lines of "an implementation really ought to be able to decode a string as long as its advisory setting" and/or "if an implementation has a limit on the maximum size of string it can decode, it would be really nice to inform the peer about it with the advisory setting". Is that reasonable? Or am I barking up the wrong tree entirely?
If HTTP/3 already sets practical limits on what QPACK needs to accept, then it is fine to say that. Or to put this another way: HTTP/3 implementations are presumably setting limits (or perhaps they are just relying on language/decoder limits). How are they deciding what those limits should be? How can they be sure that they will interoperate with all senders? I'm not an HTTP expert, so perhaps I'm missing something obvious here.
HTTP (not HTTP/3) has historically not communicated those limits, which are implementation-dependent. You just find out if you hit them -- status codes 413 (Content Too Large) and 414 (URI Too Long) cover these, but some implementations will also send a 400 out of discretion.

HTTP/2 introduced a mechanism to inform the peer what your limit is (SETTINGS_MAX_HEADER_LIST_SIZE), which lets the failure occur further downstream and saves everyone effort -- a client or downstream proxy can fail a too-large request on behalf of the server before actually transferring it, rather than discovering the limit only after the server rejects it. It's optional, though; some endpoints prefer not to disclose their limits to avoid giving information to attackers, and some endpoints don't know how big the message is going to be when they start processing it. HTTP/3 maintains parity with that (SETTINGS_MAX_FIELD_SECTION_SIZE), and the feature is similarly optional.

Regardless of whether they're communicated, though, sane implementations have a limit. In at least some implementations, this limit is configurable, so it can be increased in situations where you have mostly-trusted clients and a known use case for very large headers or URIs. That's not a protocol element; I don't think we can reasonably say much more than that the QPACK library SHOULD/MUST be able to process the largest individual header or header section that its HTTP implementation can be configured to accept.
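The "largest field section" measurement Mike refers to is well-defined in HTTP/3: a field section's size is the sum of each field's name length and value length, plus 32 octets of per-field overhead. A small sketch of checking against an advertised SETTINGS_MAX_FIELD_SECTION_SIZE (the helper names are made up):

```python
FIELD_OVERHEAD = 32  # octets of per-field overhead HTTP/3 uses when
                     # measuring a field section against the setting

def field_section_size(fields):
    """fields: iterable of (name, value) byte-string pairs."""
    return sum(len(n) + len(v) + FIELD_OVERHEAD for n, v in fields)

def within_limit(fields, max_field_section_size):
    # An endpoint that advertised SETTINGS_MAX_FIELD_SECTION_SIZE may
    # treat a larger section as an error; a sender that received the
    # setting can use the same check to avoid sending one.
    return field_section_size(fields) <= max_field_section_size
```

Since the setting is advisory, this check is an optimization on the sending side and a policy decision on the receiving side, not a protocol invariant.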
Mike, thanks for the additional context. Stating something along the lines of the following would be fine with me:
Closing this now that the IESG have approved the document and it's with the RFC editor.