
HTTP/2 HPACK decoder oversized header handling causes client/server Dynamic Table inconsistencies #6209

Closed
exell-christopher opened this issue Jan 12, 2017 · 17 comments
exell-christopher commented Jan 12, 2017

Problem Description

When a client sends oversized headers that cause the decoder to exceed the MAX_HEADER_LIST_SIZE setting, the decoder stops processing headers and throws a HeaderListSizeException, which results in a 431 response and a RST_STREAM. If the incoming request had additional headers after the one that triggered the HeaderListSizeException, and those headers would have been incrementally indexed, they are ignored. As a result, subsequent requests from the same client that reference what the client believes are indexed headers are rejected with a GO_AWAY COMPRESSION_ERROR, because the server-side decoder's dynamic table does not contain those indexed headers.

Expected behavior

When oversized headers are received, Netty should continue processing headers and updating the dynamic table until all headers have been received. Once header processing is complete, the HeaderListSizeException should be thrown. To prevent malicious users from sending an unbounded number of header frames and tying up resources, there should be a capped MAX_HEADER_LIST_SIZE overflow that Netty will accept; beyond that, it should issue a GO_AWAY to the client and close the connection.
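The two-level limit described above can be sketched roughly as follows. This is an illustrative sketch only; the class, field, and method names (`HeaderListLimits`, `maxHeaderListSizeGoAway`, `check`) are hypothetical, not taken from Netty's hpack.Decoder.

```java
// Illustrative sketch of a two-level header-list size limit.
// All names here are hypothetical stand-ins, not Netty's actual API.
final class HeaderListLimits {
    static final int OK = 0;
    static final int SOFT_LIMIT_EXCEEDED = 1; // keep decoding, RST_STREAM when done
    static final int HARD_LIMIT_EXCEEDED = 2; // GO_AWAY and close the connection

    private final long maxHeaderListSize;       // advertised SETTINGS_MAX_HEADER_LIST_SIZE
    private final long maxHeaderListSizeGoAway; // capped overflow beyond the advertised limit

    HeaderListLimits(long maxHeaderListSize, long maxHeaderListSizeGoAway) {
        this.maxHeaderListSize = maxHeaderListSize;
        this.maxHeaderListSizeGoAway = maxHeaderListSizeGoAway;
    }

    // Classify the running header-list size against both limits.
    int check(long headerListSizeSoFar) {
        if (headerListSizeSoFar > maxHeaderListSizeGoAway) {
            return HARD_LIMIT_EXCEEDED;
        }
        if (headerListSizeSoFar > maxHeaderListSize) {
            return SOFT_LIMIT_EXCEEDED;
        }
        return OK;
    }
}
```

The key point is that exceeding the soft limit does not stop decoding, so the dynamic table keeps tracking the encoder's view; only the hard limit aborts the connection.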

Netty version

4.1.6 and later

Repro Case

See attached maven project zip

Other things worth noting

While this case occurs when oversized headers are received, any exception during header processing that is not already a connection error (GO_AWAY) can potentially corrupt the dynamic table, and should therefore result in a GO_AWAY.

I'm planning on working on a pull request to fix this, but figured this is worth some discussion, so I'm opening the issue first.
netty-decoder-debug.zip

@normanmaurer (Member)

@Scottmitch @nmittler can you take a look ?

@nhnFreespirit (Contributor)

I've been helping @exell-christopher with debugging this.

Our current understanding is that there is really no safe way to recover from an exception thrown in the Decoder, short of tearing down the connection: the index tables on the client and server may have gotten out of sync, with no way to safely bring them back in sync.

@Scottmitch (Member)

This is even called out in the RFC. Makes sense since HPACK is stateful.

https://tools.ietf.org/html/rfc7540#section-10.5.1

The header block MUST be processed to ensure a consistent
connection state, unless the connection is closed.

Introducing a "limit for how much we allow exceeding another limit" is starting to get intense. To make things interesting, we enforce the limit at two levels (while accumulating the header blocks, and during decompression), so implementing the "continue to process and bit bucket" approach will not be limited in scope to hpack.Decoder. A simpler alternative is to just GO_AWAY/close the connection if the peer violates the settings we advertise.

@exell-christopher (Author)

For our use case, sending a go-away as the behavior for any overflow is not desirable. The client in our case is a proxy and the inbound connection is heavily multiplexed. With what you are suggesting, one request that exceeds the limit by even one byte causes all other requests to be dropped. Having a reasonable overflow amount allows us to continue servicing the other requests and reject just the one that was oversized, but not particularly malicious.

@Scottmitch (Member) commented Jan 12, 2017

@exell-christopher - Yes I understand the implications. Just requires a bit more effort to implement, and the memory upper bound would not be set by SETTINGS_MAX_HEADER_LIST_SIZE but some other implementation dependent limit. This is not necessarily a bad thing but just needs to be communicated clearly (via javadocs, docs, etc...).

Just curious in your scenario what do you anticipate your "malicious threshold" to be vs. SETTINGS_MAX_HEADER_LIST_SIZE?

@exell-christopher (Author)

I would likely use MAX_HEADER_LIST_SIZE + (MAX_HEADER_LIST_SIZE * 0.25). I would expect the 0.25 factor to be user configurable.
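For illustration, that overflow threshold works out as follows. The helper name and the configurable factor are assumptions for the sketch, not an existing Netty API:

```java
final class Limits {
    // Hypothetical helper: hard (GO_AWAY) limit = advertised
    // SETTINGS_MAX_HEADER_LIST_SIZE plus a configurable overflow fraction.
    static long goAwayThreshold(long maxHeaderListSize, double overflowFactor) {
        return maxHeaderListSize + (long) (maxHeaderListSize * overflowFactor);
    }
}
```

With a limit of 8192 and a 0.25 factor, `goAwayThreshold(8192, 0.25)` yields 10240: requests up to 10240 octets of headers would still be decoded (and rejected with RST_STREAM), while anything larger would close the connection.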

@Scottmitch self-assigned this Jan 13, 2017
@Scottmitch (Member) commented Jan 13, 2017

I'll take a stab at this...there is some other related cruft that can be cleaned up too.

@exell-christopher (Author)

Thanks. Happy to submit a PR as well.

@Scottmitch (Member)

@exell-christopher - Already in progress :) I'll ping you for review when I have a PR ready.

@chhsiao90 (Contributor)

Some information about summing up the header length. From RFC 7540:

SETTINGS_MAX_HEADER_LIST_SIZE (0x6): This advisory setting informs a
peer of the maximum size of header list that the sender is
prepared to accept, in octets. The value is based on the
uncompressed size of header fields, including the length of the
name and value in octets plus an overhead of 32 octets for each
header field.

I think that means when we sum up all headers, we should add 32 bytes for each header.
Maybe it could be implemented at the same time! Decoder.java#L404

@chhsiao90 (Contributor) commented Jan 16, 2017

Sorry, there is a better reference, RFC 7541:

The size of an entry is the sum of its name's length in octets (as
defined in Section 5.2), its value's length in octets, and 32.

@exell-christopher (Author)

@Scottmitch Let me know if there's anything you need on this.

@Scottmitch (Member)

@exell-christopher - All good for now thanks. I will ping you once the PR is pushed (should be soon).

@Scottmitch (Member)

@chhsiao90 - We can consider this as a follow-up PR if you want. SETTINGS_MAX_HEADER_LIST_SIZE is not required to be strictly enforced. We account for the 32 bytes of overhead on the encoding side, but not on the decoding side. So we are currently following the "be liberal in what you accept and conservative in what you send" principle, which I'm not yet convinced is a problem in this case.

@chhsiao90 (Contributor)

@Scottmitch
Sure, I will rebase once the PR is merged.

About the 32-byte overhead: I found that the encoder calculates the header size by adding 32 bytes for each header (Encoder), but the decoder doesn't add the 32 bytes in the current 4.1 branch. Or maybe I missed where the 32 bytes are added when encoding.

@Scottmitch (Member)

I found that encoder will calculate the header size by add 32 ... But the decoder didn't add the 32 bytes

Yip this reflects the current state. This is what I described in #6209 (comment).

@chhsiao90 (Contributor)

OK, sorry, I misread that... :(
So the 32-byte overhead can be implemented or not, depending on need.
Thanks for clarifying!

Scottmitch added a commit to Scottmitch/netty that referenced this issue Jan 19, 2017
Motivation:
If the HPACK Decoder detects that SETTINGS_MAX_HEADER_LIST_SIZE has been violated, it aborts immediately and sends a RST_STREAM frame for whatever stream caused the issue. Because HPACK is stateful, this means the HPACK state may become out of sync between peers, and the issue won't be detected until the next headers frame. We should make a best effort to keep processing so the HPACK state stays in sync with our peer, or completely close the connection.
If the HPACK Encoder is configured to verify SETTINGS_MAX_HEADER_LIST_SIZE it checks the limit and encodes at the same time. This may result in modifying the HPACK local state but not sending the headers to the peer if SETTINGS_MAX_HEADER_LIST_SIZE is violated. This will also lead to an inconsistency in HPACK state that will be flagged at some later time.

Modifications:
- HPACK Decoder now has 2 levels of limits related to SETTINGS_MAX_HEADER_LIST_SIZE. The first will attempt to keep processing data and send a RST_STREAM after all data is processed. The second will send a GO_AWAY and close the entire connection.
- When the HPACK Encoder enforces SETTINGS_MAX_HEADER_LIST_SIZE it should not modify the HPACK state until the size has been checked.
- https://tools.ietf.org/html/rfc7540#section-6.5.2 states that the initial value of SETTINGS_MAX_HEADER_LIST_SIZE is "unlimited". We currently use 8k as the limit. We should honor the specification's default value so we don't unintentionally close a connection before the remote peer is aware of the local settings.
- Remove unnecessary object allocation in DefaultHttp2HeadersDecoder and DefaultHttp2HeadersEncoder.

Result:
Fixes netty#6209.
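The first modification above, keep decoding past the soft limit so the dynamic table stays in sync, then fail the stream, can be sketched as below. This is illustrative only: the list-of-pairs representation, the null-means-RST_STREAM signaling, and the exception type are hypothetical stand-ins for Netty's internals, not its actual Decoder.

```java
import java.util.ArrayList;
import java.util.List;

final class SyncSafeDecode {
    /**
     * Decode every header in the block, updating decoder state for each
     * entry so the dynamic table matches the encoder's view. Returns the
     * headers, or null to signal "RST_STREAM this stream" when the soft
     * limit was exceeded; throws to signal "GO_AWAY and close the
     * connection" when the hard limit was exceeded. (Hypothetical scheme.)
     */
    static List<String[]> decode(List<String[]> headers,
                                 long maxHeaderListSize,
                                 long maxHeaderListSizeGoAway) {
        List<String[]> out = new ArrayList<>();
        long size = 0;
        boolean softLimitExceeded = false;
        for (String[] h : headers) {
            size += h[0].length() + h[1].length() + 32; // RFC 7541 entry size
            if (size > maxHeaderListSizeGoAway) {
                // Hard limit: stop immediately and tear down the connection.
                throw new IllegalStateException("GO_AWAY: header list too large");
            }
            if (size > maxHeaderListSize) {
                // Soft limit: note the violation but keep processing so the
                // dynamic table stays in sync with the peer.
                softLimitExceeded = true;
            }
            out.add(h); // stand-in for updating the dynamic table
        }
        return softLimitExceeded ? null : out;
    }
}
```

The design point is that the stream error is deferred until the whole header block has mutated the decoder state, matching the RFC 7540 Section 10.5.1 requirement that the block be fully processed unless the connection is closed.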
liuzhengyang pushed a commit to liuzhengyang/netty that referenced this issue Sep 10, 2017
5 participants