handle 100-continue and oversized streaming request #112179
Conversation
Pinging @elastic/es-distributed (Team:Distributed)
// ensures that server reply 413-too-large on oversized chunked encoding request and closes connection
public void test413TooLargeOnChunkedEncoding() throws Exception {
I don't think this is the right behaviour. We might have already started processing the chunked request, but 413 implies we are completely rejecting it according to RFC 9110 §15.5.14:

> The 413 (Content Too Large) status code indicates that the server is refusing to process a request because the request content is larger than the server is willing or able to process.

Instead, we need to let the RestHandler deal with this situation, so that different handlers can respond in different ways. For instance the _bulk API should report the usual doc-level responses for any successfully-processed docs, and should return a doc-level 429 for docs past the limit. Other APIs that aggregate the whole request up-front can still reasonably return a 413.
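The per-handler behaviour suggested above can be illustrated with a small sketch. This is a hypothetical, simplified model (the class and method names are illustrative, not Elasticsearch APIs): a streaming bulk-style handler charges each incoming doc against a size budget and, once the budget is exhausted, records a doc-level 429 for the remaining docs instead of failing the whole request with a 413.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: a streaming handler that keeps doc-level results
// for what it has already processed, and marks docs past the size limit 429.
public class StreamingBudget {
    private final long maxBytes;
    private long consumed;

    public StreamingBudget(long maxBytes) {
        this.maxBytes = maxBytes;
    }

    /** Per-doc status: 200 while within budget, 429 once the budget is spent. */
    public int statusFor(byte[] doc) {
        if (consumed + doc.length > maxBytes) {
            return 429; // doc past the limit: doc-level rejection, not a whole-request 413
        }
        consumed += doc.length;
        return 200;
    }

    public static List<Integer> process(List<byte[]> docs, long maxBytes) {
        StreamingBudget budget = new StreamingBudget(maxBytes);
        List<Integer> statuses = new ArrayList<>();
        for (byte[] doc : docs) {
            statuses.add(budget.statusFor(doc));
        }
        return statuses;
    }

    public static void main(String[] args) {
        // First two docs fit in the 1000-byte budget; the third exceeds it.
        System.out.println(process(
            List.of(new byte[600], new byte[300], new byte[300]), 1000));
    }
}
```

An aggregating API, by contrast, knows the total size up front and can reject the whole request with a single 413 before doing any work.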
Now I remember, we talked about it. I tried to make it similar to what http-object-aggregator does. I will remove the rejection for chunked requests.
LGTM
This commit backports all of the work introduced in #113044:
* #111438 - 5e1f655
* #111865 - 478baf1
* #112179 - 1b77421
* #112227 - cbcbc34
* #112267 - c00768a
* #112154 - a03fb12
* #112479 - 95b42a7
* #112608 - ce2d648
* #112629 - 0d55dc6
* #112767 - 2dbbd7d
* #112724 - 58e3a39
* dce8a0b
* #112974 - 92daeeb
* 529d349
* #113161 - e3424bd
Extend Netty4HttpAggregator to handle 100-continue and to return 413/417 for partial content. I reuse the public API from MessageAggregator that handles 100-continue. For a known content length I don't close the connection after a 413/417. For chunked content there is no limit; the rest handler will decide when to stop.

Added integration tests.
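The pre-content decision described above can be sketched roughly as follows. This is a hedged, standalone model of the RFC 9110 semantics the PR relies on, not the actual Netty4HttpAggregator code; the class and constant names are made up for illustration. With a known Content-Length, an oversized request gets 413 up front, an `Expect` value other than `100-continue` gets 417, and chunked content (unknown length) is waved through for the handler to police.

```java
// Hypothetical sketch of the decision made before reading request content.
public class ExpectDecision {
    static final int CONTINUE = 100;          // interim "100 Continue" response
    static final int OK_TO_STREAM = 0;        // no Expect header: just read the body
    static final int PAYLOAD_TOO_LARGE = 413; // known length exceeds the limit
    static final int EXPECTATION_FAILED = 417; // unsupported Expect value

    /**
     * @param expectHeader  value of the Expect header, or null if absent
     * @param contentLength declared Content-Length, or -1 for chunked encoding
     * @param maxBytes      server limit for aggregated content
     */
    static int decide(String expectHeader, long contentLength, long maxBytes) {
        if (expectHeader != null && !expectHeader.equalsIgnoreCase("100-continue")) {
            return EXPECTATION_FAILED; // RFC 9110: unknown expectation -> 417
        }
        if (contentLength > maxBytes) {
            return PAYLOAD_TOO_LARGE;  // reject up front; no need to read the body
        }
        // Chunked (length -1) or acceptable length: let the client proceed.
        return expectHeader == null ? OK_TO_STREAM : CONTINUE;
    }

    public static void main(String[] args) {
        System.out.println(decide("100-continue", 512, 1024));  // within limit
        System.out.println(decide("100-continue", 4096, 1024)); // oversized, known length
        System.out.println(decide("minimum-rate", 512, 1024));  // unsupported expectation
        System.out.println(decide(null, -1, 1024));             // chunked: handler decides
    }
}
```

Because the full content was never read in the 413/417 cases, the connection can stay open for the next request, which matches the PR's choice not to close it when the content length is known.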