Semantics of SETTINGS_MAX_HEADER_LIST_SIZE #2516
Comments
My preference goes to being strict. Quoting from
I also think that what @dtikhonov points out in
Thanks for creating the issue @kazuho! Do we have some deployment experience that can weigh in here? I've heard anecdotal stories that nginx ignores the setting in HTTP/2 by default. And I've found an interesting Chrome crbug that also mentions HTTP/2 load balancers - https://bugs.chromium.org/p/chromium/issues/detail?id=751642
We have experience from HTTP/2, where it was clear that no intermediary was ever in a position to comply. The setting is hop-by-hop and header fields are end-to-end, so there is an inherent incompatibility. I don't see any stack being able to comply with a limit that is too low for the current message. I'd prefer to see it removed if it came to that.
I agree with removing it.
I, too, think that removing this setting is fine.
Does this mean constrained devices will not need a control stream?
My own $0.02... I would much prefer to have this and treat it more strictly. In my experience, explicit limits are much easier to reason about than hidden limits. I'd much rather fail a request early than send it to a peer only to have it fail with a limit that I know nothing about.
I agree with Ryan. Any reasonable server is going to have a limit and will have to fail requests that violate it. It's better to have this be explicit.
Agreed, explicit limits are good. Unexpected failures are not. In the case of proxies, it seems like the min of the received setting and the local limit could be used? But I guess the proxy may not know all backends it may contact when it advertises a limit. I still think something is better than nothing, but I'm not sure how to fully solve the proxy case.
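The min-based idea above can be sketched as follows. This is purely illustrative (the function name and parameters are hypothetical, not from any real proxy), and it leaves unsolved the problem noted above: backends discovered after the setting is advertised cannot influence it.

```python
def advertised_limit(local_limit, upstream_limits):
    """Pick the SETTINGS_MAX_HEADER_LIST_SIZE value a proxy could advertise:
    the minimum of its own limit and the limits advertised by the upstreams
    it already knows about.

    upstream_limits may be empty if no backend has been contacted yet, in
    which case only the local limit is available -- this is the unsolved
    part of the proxy case.
    """
    return min([local_limit, *upstream_limits])
```

For example, a proxy with a local limit of 16384 octets whose known backends advertise 8192 and 32768 would advertise 8192.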
If this limit is not advisory, there is a good chance that some web applications will break once the underlying server is upgraded to HTTP/3. The reason is that HTTP/2 servers ignore limits sent by the clients and the browsers just take it. Hackish? Yes. But it works. |
How about: SHOULD enforce, but MAY be configured to ignore it or to use a higher multiple for compat reasons?
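A minimal sketch of that "SHOULD enforce, MAY relax" proposal (hypothetical names; the multiplier knob is the compat mechanism suggested above, not anything defined in the draft):

```python
def exceeds_limit(header_list_size, advertised, multiplier=1.0):
    """Return True if a received header list violates the enforced limit.

    multiplier > 1.0 tolerates peers that overshoot the advertised value
    for compatibility reasons; a deployment configured to ignore the
    setting entirely can pass multiplier=None.
    """
    if multiplier is None:
        return False  # enforcement disabled by configuration
    return header_list_size > advertised * multiplier
```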
An optional limit that's rarely enforced and, perhaps worse, unpredictably enforced seems like a waste of a limit. Implementations already have limits on the size of individual headers, which trigger semantic-layer errors. Why does the sum total need to trigger a syntax type of error? Enforcing an error makes this stricter than H2, hurting interoperability between implementations that support H3 and H2 at the same time. Speculating, I can see this causing some strange issues in the wild, where clients Alt-Svc up to H3, encounter a sporadic error and then purge H3 endpoints from their Alt-Svc cache.
Yes. The problem here is the difficulty of the sender of the header block complying with the limit advertised by the peer. Therefore, "SHOULD enforce" does not work as a fix.
FWIW, there are at least two issues regarding proxies.
In theory, this should advertise the limit beyond which a peer wouldn't have propagated the payload anyway; it permits, but doesn't require, the entity that's about to send the payload to send a (minuscule) error in the opposite direction / on the API rather than forwarding a message it expects to be rejected based on header size. That's purely an optimization, and it's always acceptable to just transfer the big thing and find out at the end that it failed. If browsers will consume arbitrarily-large responses, then the correct value for this setting for them is unlimited. Because there could be multiple intermediaries, it's entirely possible that one stream could see an EXCESSIVE_LOAD error even though the headers weren't too large for the first hop upstream. That's one of the biggest reasons this needs to be a stream error -- the setting is hop-by-hop, but enforcement is ultimately on a per-request basis across the set of hops. If used this way, it shouldn't break anything that wasn't broken before -- things that would have failed will simply fail faster. If you don't implement it, then you'll still eventually get an EXCESSIVE_LOAD or a 400. I agree that the current text doesn't reflect this usage pattern. If we agree that this is the correct way to envision it, I'll write up better text.
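The fail-fast optimization described above can be sketched as follows. The size accounting matches how HTTP/2 defines SETTINGS_MAX_HEADER_LIST_SIZE (uncompressed name length plus value length plus 32 octets of overhead per field); the function names and the ValueError are illustrative, not from any real stack.

```python
def header_list_size(headers):
    """Uncompressed header list size as counted for
    SETTINGS_MAX_HEADER_LIST_SIZE: len(name) + len(value) + 32 per field."""
    return sum(len(name) + len(value) + 32 for name, value in headers)

def send_or_fail_early(headers, peer_limit):
    """Fail a request locally if it is sure to exceed the peer's limit.

    This is the optional optimization: refuse cheaply before sending,
    rather than transferring the whole message only to receive an error
    (e.g. EXCESSIVE_LOAD) from the peer afterwards.
    """
    if peer_limit is not None and header_list_size(headers) > peer_limit:
        raise ValueError("header list exceeds peer's advertised limit")
    # ...otherwise proceed to encode and send the header block...
```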
Discussed in London: an implementation SHOULD NOT send its peer a message that's over the peer's advertised size. We need text clarifying that the peer is advertising the (approximate) limit at which it will refuse to accept a message, enabling the local implementation to fail more quickly and avoiding the risk that the "refusal" might be a connection termination, which would disrupt legitimate messages as well.
As long as it isn't a MUST. There might be pre-canned requests of a reasonable size that generally just work, and having to check these might not be worthwhile on the odd chance of a very restrictive peer.
The semantics of SETTINGS_MAX_HEADER_LIST_SIZE are unclear. We should either:
Spun off from #2510 and #2511.