HTTP/2 HPACK decoder oversized header handling causes client/server Dynamic Table inconsistencies #6209
@Scottmitch @nmittler can you take a look?
I've been helping @exell-christopher with debugging this. Our current understanding is that there really is no safe way to recover from an exception thrown in the Decoder except for tearing down the connection, as the index tables on the client and server can get out of sync with no way to safely bring them back in sync.
This is even called out in the RFC. Makes sense since HPACK is stateful. https://tools.ietf.org/html/rfc7540#section-10.5.1
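To make the statefulness concrete, here is a toy model (not Netty's actual `HpackDecoder`) showing how an aborted header block desynchronizes the two dynamic tables; the header names are illustrative:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy sketch: each peer keeps a dynamic table; if the decoder aborts
// mid-block, it misses an insertion the encoder already applied, and
// every later indexed reference resolves to a different entry.
public class HpackDesyncDemo {
    // Newest entry goes to the head, mirroring HPACK's index ordering.
    static void insert(Deque<String> table, String header) {
        table.addFirst(header);
    }

    public static void main(String[] args) {
        Deque<String> encoderTable = new ArrayDeque<>();
        Deque<String> decoderTable = new ArrayDeque<>();

        // The sender indexes two headers in one header block.
        insert(encoderTable, "x-small: a");
        insert(encoderTable, "x-large: b");

        // The receiver aborts after the first entry (size limit hit),
        // so the second insertion is never applied.
        insert(decoderTable, "x-small: a");

        // The lowest dynamic-table index now means different things
        // to each peer, and there is no protocol mechanism to resync.
        System.out.println(encoderTable.peekFirst()); // prints "x-large: b"
        System.out.println(decoderTable.peekFirst()); // prints "x-small: a"
    }
}
```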
Introducing a "limit for how much we allow exceeding another limit" is starting to get intense. To make things interesting, we enforce the limit at two levels (while accumulating the header blocks, and during decompression). So implementing the "continue to process and bit bucket" approach will not be limited in scope to
For our use case, sending a GO_AWAY as the behavior for any overflow is not desirable. The client in our case is a proxy and the inbound connection is heavily multiplexed. With what you are suggesting, one request that exceeds the limit by even one byte causes all other requests to be dropped. Having a reasonable overflow amount allows us to continue servicing the other requests and reject just the one that was oversized, but not particularly malicious.
@exell-christopher - Yes, I understand the implications. It just requires a bit more effort to implement, and the memory upper bound would not be set by
Just curious, in your scenario what do you anticipate your "malicious threshold" to be vs.
I would likely use MAX_HEADER_LIST_SIZE + (MAX_HEADER_LIST_SIZE * .25). I would expect the .25 to be user configurable.
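The suggested threshold above can be sketched as a small helper; the 0.25 overflow factor is the commenter's proposal, not a fixed Netty constant, and the method name is illustrative:

```java
// Sketch of the proposed two-level limit. Below maxHeaderListSize the
// request is fine; between that and the go-away threshold the stream is
// reset but decoding continues (to keep HPACK state in sync); beyond the
// threshold the whole connection is torn down with GO_AWAY.
public class HeaderLimits {
    static long goAwayThreshold(long maxHeaderListSize, double overflowFactor) {
        return maxHeaderListSize + (long) (maxHeaderListSize * overflowFactor);
    }

    public static void main(String[] args) {
        long max = 8192; // example soft limit
        System.out.println(goAwayThreshold(max, 0.25)); // prints 10240
    }
}
```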
I'll take a stab at this... there is some other related cruft that can be cleaned up too.
Thanks. Happy to submit a PR as well.
@exell-christopher - Already in progress :) I'll ping you for review when I have a PR ready.
Some information about summing up the header length:
I think that means when we sum up all headers, we should add 32 bytes for each header.
Sorry, there is a better reference: RFC 7541.
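For reference, RFC 7541 section 4.1 defines the size of a header field as the length of its name in octets, plus the length of its value in octets, plus 32 octets of overhead. A minimal sketch of that accounting (assuming ASCII headers, so character count equals octet count):

```java
// Header list size per RFC 7541 section 4.1: each entry contributes
// name length + value length + 32 octets of per-entry overhead.
public class HeaderListSize {
    static long entrySize(String name, String value) {
        // String.length() equals octet count only for ASCII headers,
        // which is sufficient for this illustration.
        return name.length() + value.length() + 32;
    }

    public static void main(String[] args) {
        // (7 + 3 + 32) + (5 + 1 + 32) = 42 + 38 = 80
        long total = entrySize(":method", "GET") + entrySize(":path", "/");
        System.out.println(total); // prints 80
    }
}
```

This is the size that is compared against SETTINGS_MAX_HEADER_LIST_SIZE, which is why the 32-byte-per-header overhead matters when enforcing the limit.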
@Scottmitch Let me know if there's anything you need on this.
@exell-christopher - All good for now, thanks. I will ping you once the PR is pushed (should be soon).
@chhsiao90 - We can consider this as a followup PR if you want. The
@Scottmitch About the 32 bytes overhead,
Yep, this reflects the current state. This is what I described in #6209 (comment).
OK, sorry, I misread that... :(
Motivation:

If the HPACK Decoder detects that SETTINGS_MAX_HEADER_LIST_SIZE has been violated, it aborts immediately and sends a RST_STREAM frame for whatever stream caused the issue. Because HPACK is stateful, this means the HPACK state may become out of sync between peers, and the issue won't be detected until the next headers frame. We should make a best effort to keep processing so the HPACK state stays in sync with our peer, or completely close the connection.

If the HPACK Encoder is configured to verify SETTINGS_MAX_HEADER_LIST_SIZE, it checks the limit and encodes at the same time. This may result in modifying the local HPACK state but not sending the headers to the peer if SETTINGS_MAX_HEADER_LIST_SIZE is violated. This will also lead to an inconsistency in HPACK state that will be flagged at some later time.

Modifications:

- The HPACK Decoder now has 2 levels of limits related to SETTINGS_MAX_HEADER_LIST_SIZE. The first will attempt to keep processing data and send a RST_STREAM after all data is processed. The second will send a GO_AWAY and close the entire connection.
- When the HPACK Encoder enforces SETTINGS_MAX_HEADER_LIST_SIZE, it should not modify the HPACK state until the size has been checked.
- https://tools.ietf.org/html/rfc7540#section-6.5.2 states that the initial value of SETTINGS_MAX_HEADER_LIST_SIZE is "unlimited". We currently use 8k as a limit. We should honor the specification's default value so we don't unintentionally close a connection before the remote peer is aware of the local settings.
- Remove unnecessary object allocation in DefaultHttp2HeadersDecoder and DefaultHttp2HeadersEncoder.

Result:

Fixes netty#6209.
Problem Description
When a client sends oversized headers that cause the decoder to exceed the MAX_HEADER_LIST_SIZE setting, the decoder stops processing headers and throws a HeaderListSizeException, which results in a 431 response and a RST_STREAM. If the incoming request had additional headers after the one that triggered the HeaderListSizeException that would have been incrementally indexed, they are ignored. As a result, subsequent requests from the same client that reference what the client believes are indexed headers are rejected with a GO_AWAY COMPRESSION_ERROR, because the server-side decoder's dynamic table does not contain the indexed headers.
Expected behavior
When oversized headers are received, Netty should continue processing headers and updating the dynamic table until all headers have been received. Once header processing is complete, the HeaderListSizeException should be thrown. To prevent malicious users from sending an unbounded number of header frames and tying up resources, there should be a capped MAX_HEADER_LIST_SIZE overflow that Netty will accept; beyond that cap, it should issue a GO_AWAY to the client and close the connection.
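The expected two-level behavior described above can be sketched as follows; the class, method, and limit names are illustrative, not Netty's actual API:

```java
// Sketch of the proposed flow: keep applying dynamic-table updates past
// the soft limit so HPACK state stays in sync (then reset the stream),
// and only tear down the connection once a hard cap is exceeded.
public class TwoLevelDecoderSketch {
    enum Outcome { OK, RST_STREAM, GO_AWAY }

    static Outcome decodeHeaderBlock(long[] entrySizes, long softLimit, long hardLimit) {
        long total = 0;
        boolean softExceeded = false;
        for (long size : entrySizes) {
            total += size;
            if (total > hardLimit) {
                return Outcome.GO_AWAY; // unrecoverable: close the connection
            }
            if (total > softLimit) {
                softExceeded = true; // remember, but keep processing entries
            }
            // ...apply the dynamic-table insertion for this entry here...
        }
        // Only after the whole block is processed do we fail the stream.
        return softExceeded ? Outcome.RST_STREAM : Outcome.OK;
    }

    public static void main(String[] args) {
        // 9000 bytes total: over the 8192 soft limit, under the 10240 cap,
        // so just this stream is reset and the connection survives.
        System.out.println(decodeHeaderBlock(new long[]{4000, 4000, 1000}, 8192, 10240));
        // prints RST_STREAM
    }
}
```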
Netty version
4.1.6 and later
Repro Case
See attached maven project zip
Other things worth noting
While this case occurs when oversized headers are received, any exception during header processing that is not a connection error (GO_AWAY) can potentially corrupt the dynamic table, and should therefore result in a GO_AWAY.
I'm planning on working on a pull request to fix this, but figured this is worth some discussion, so I'm opening the issue first.
netty-decoder-debug.zip