
Semantics of SETTINGS_MAX_HEADER_LIST_SIZE #2516

Closed
kazuho opened this issue Mar 11, 2019 · 16 comments · Fixed by #2774

@kazuho
Member

kazuho commented Mar 11, 2019

The semantics of SETTINGS_MAX_HEADER_LIST_SIZE are unclear. We should either:

  • state that it's an advisory value that an endpoint is permitted to ignore. Otherwise, an endpoint written under the impression that it's a value to be enforced might terminate the connection (note: H3 allows a stream error to be promoted to a connection error), thereby disrupting other HTTP requests in flight.
  • state that an endpoint must not exceed the value advertised by the peer.

spun off from #2510 #2511.
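For reference, the quantity being limited here is defined by HTTP/2 (RFC 7540 §6.5.2) as the sum of each header field's uncompressed name and value lengths plus 32 octets of per-entry overhead; HTTP/3 reuses that definition. A minimal sketch of the "enforced" interpretation (the function names are illustrative, not from any spec):

```python
# Compute the header list size as defined for SETTINGS_MAX_HEADER_LIST_SIZE:
# sum over entries of len(name) + len(value) + 32 octets of overhead.
def header_list_size(headers):
    """headers: iterable of (name, value) byte-string pairs."""
    return sum(len(name) + len(value) + 32 for name, value in headers)

def exceeds_peer_limit(headers, peer_max_header_list_size):
    """True if sending `headers` would exceed the peer's advertised limit
    (the strict interpretation: the sender must not exceed it)."""
    return header_list_size(headers) > peer_max_header_list_size

headers = [(b":method", b"GET"), (b":path", b"/"), (b":scheme", b"https")]
print(header_list_size(headers))  # 124
```

Note the size is computed on the *uncompressed* fields, so a sender can check it before QPACK encoding.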

@kazuho
Member Author

kazuho commented Mar 11, 2019

My preference is to be strict. Quoting from #2510 (comment):

Think of it from an end-user perspective. Without the restriction, we might see one user agent telling you that the request is too large, and another processing the same request. That'd be confusing.

I also think that what @dtikhonov points out in
#2510 (comment) is an argument for making this enforced (or dropping the setting entirely). Unless we make it an enforcement, implementations would be tempted to ignore the setting for "maximum interoperability," making the feature meaningless.

@LPardue
Member

LPardue commented Mar 11, 2019

Thanks for creating the issue @kazuho!

Do we have some deployment experience that can weigh in here? I've heard anecdotal stories that nginx ignores the setting in HTTP/2 by default. And I've found an interesting Chrome crbug that also mentions HTTP/2 load balancers - https://bugs.chromium.org/p/chromium/issues/detail?id=751642

@mnot mnot added the -http label Mar 11, 2019
@martinthomson
Member

We have experience from HTTP/2, where it was clear that no intermediary was ever in a position to comply. The setting is hop-by-hop and header fields are end-to-end, so there is an inherent incompatibility. I don't see any stack being able to comply with a limit that is too low for the current message. I'd prefer to see it removed if it came to that.

@tatsuhiro-t
Contributor

I agree with removing it.

@dtikhonov
Member

I, too, think that removing this setting is fine.

@mikkelfj
Contributor

Does this mean constrained devices will not need a control stream?

@RyanTheOptimist
Contributor

My own $0.02... I would much prefer to have this and treat it more strictly. In my experience, explicit limits are much easier to reason about than hidden limits. I'd much rather fail a request early than send it to a peer only to have it fail with a limit that I know nothing about.

@afrind
Contributor

afrind commented Mar 13, 2019

I agree with Ryan. Any reasonable server is going to have a limit and will have to fail requests that violate it. It's better to have this be explicit.

@ianswett
Contributor

ianswett commented Mar 13, 2019

Agreed, explicit limits are good. Unexpected failures are not.

In the case of proxies, it seems like the min of the received setting and the local limit could be used? But I guess the proxy may not know all the backends it may contact when it advertises a limit. I still think something is better than nothing, but I'm not sure how to fully solve the proxy case.
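A hypothetical sketch of the min-of-limits idea above (the function and parameter names are illustrative; it assumes the proxy already knows some backend limits, which, as noted, may not hold when backends are chosen per request):

```python
# A proxy advertises the minimum of its own limit and the limits it has
# received from the backends it knows about. If it knows no backends yet,
# it can only fall back to its own limit.
def limit_to_advertise(local_limit, backend_limits):
    """backend_limits: SETTINGS_MAX_HEADER_LIST_SIZE values received from
    known backends; may be empty if backends are selected per-request."""
    if not backend_limits:
        return local_limit
    return min(local_limit, min(backend_limits))

print(limit_to_advertise(65536, [16384, 32768]))  # 16384
```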

@dtikhonov
Member

If this limit is not advisory, there is a good chance that some web applications will break once the underlying server is upgraded to HTTP/3. The reason is that HTTP/2 servers ignore limits sent by clients, and the browsers just take it. Hackish? Yes. But it works.

@mikkelfj
Contributor

How about SHOULD enforce, but MAY be configured to ignore it, or to use a higher multiple for compat reasons?

@LPardue
Member

LPardue commented Mar 13, 2019

An optional limit that's rarely enforced and, perhaps worse, unpredictably enforced seems like a waste of a limit.

Implementations already have limits on the size of individual headers, which trigger semantic-layer errors. Why does the sum total need to trigger a syntax-level error?

Enforcing an error makes this stricter than H2, hurting interoperability between implementations that support H3 and H2 at the same time.

Speculating, I can see this causing some strange issues in the wild, where clients Alt-Svc up to H3, encounter a sporadic error and then purge H3 endpoints from their Alt-Svc cache.

@kazuho
Member Author

kazuho commented Mar 13, 2019

Yes. The problem here is the difficulty of the sender of the header block complying with the limit advertised by the peer. Therefore, "SHOULD enforce" does not work as a fix.

@ianswett

In the case of proxies, it seems like the min of the received setting and the local limit could be used? But I guess the proxy may not know all the backends it may contact when it advertises a limit. I still think something is better than nothing, but I'm not sure how to fully solve the proxy case.

FWIW, there are at least two issues regarding proxies.

  • Downstream (e.g., the origin server in the case of an HTTP request) having smaller limits. In that case, the proxy will read the request and then send a LIMIT_EXCEEDED error without forwarding the request.
  • A proxy that streams the header block. Such a proxy cannot tell the uncompressed size of the header block until it receives the last byte of the HEADERS frame, but by the time that is observed, it would already have started sending the headers downstream.

@MikeBishop
Contributor

MikeBishop commented Apr 24, 2019

In theory, this should advertise the limit beyond which a peer wouldn't have propagated the payload anyway; it permits, but doesn't require, the entity that's about to send the payload to send a (minuscule) error in the opposite direction / on the API, rather than forwarding a message it expects to be rejected based on header size. That's purely an optimization, and it's always acceptable to just transfer the big thing and find out at the end that it failed. If browsers will consume arbitrarily-large responses, then the correct value of this setting for them is unlimited.

Because there could be multiple intermediaries, it's entirely possible that one stream could see an EXCESSIVE_LOAD error even though the headers weren't too large for the first hop upstream. That's one of the biggest reasons this needs to be a stream error -- the setting is hop-by-hop, but enforcement is ultimately on a per-request basis across the set of hops.

If used this way, it shouldn't break anything that wasn't broken before -- things that would have failed will simply fail faster. If you don't implement it, then you'll still eventually get an EXCESSIVE_LOAD or a 400.

I agree that the current text doesn't reflect this usage pattern. If we agree that this is the correct way to envision it, I'll write up better text.

@MikeBishop
Contributor

Discussed in London: an implementation SHOULD NOT send its peer a message that's over the peer's advertised size. We'll add text clarifying that the peer is advertising the (approximate) limit at which it will refuse to accept a message, enabling the local implementation to fail more quickly and avoid the risk that the "refusal" might be a connection termination, which would disrupt legitimate messages as well.
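The London resolution above amounts to a sender-side fail-fast check: compare the uncompressed header list size against the peer's advertised limit before sending, and fail the request locally rather than transfer it and risk a refusal. A sketch under that reading (names are illustrative; "no limit advertised" is modeled as infinity, matching H2's default when the setting is absent):

```python
# Sender-side SHOULD NOT check: fail a request locally when its header
# list size exceeds the peer's advertised SETTINGS_MAX_HEADER_LIST_SIZE,
# instead of sending it and discovering the rejection remotely.
UNLIMITED = float("inf")  # peer advertised no limit

def should_send(headers, peer_limit=UNLIMITED):
    """headers: iterable of (name, value) byte-string pairs."""
    size = sum(len(name) + len(value) + 32 for name, value in headers)
    return size <= peer_limit

headers = [(b":method", b"POST"), (b"x-big", b"a" * 100_000)]
if not should_send(headers, peer_limit=16384):
    print("fail the request locally instead of sending it")
```

Because this is SHOULD NOT rather than MUST NOT, an implementation that skips the check is still conformant; it just discovers the rejection remotely.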

@mikkelfj
Contributor

mikkelfj commented Jun 3, 2019

As long as it isn't a MUST. There might be pre-canned requests of a reasonable size that generally just work, and having to check these might not be worthwhile on the odd chance of a very restrictive peer.
