Alternator fails to reject oversized requests if not using Content-Length #8196
I don't think it merits a seastar feature. You should read from the content stream, and if you decide it's too long, close the stream and abort the request with an error. This way the condition (too long content) isn't hard-coded into Seastar but is left to the application.
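The application-side approach suggested above can be sketched as a read loop that counts bytes coming off a chunk-yielding body stream and aborts once a fixed limit is crossed. This is a minimal Python sketch; the `RequestTooLarge` name and the helper itself are illustrative, not Seastar or Scylla API:

```python
# Sketch of application-side request-size enforcement: read the body
# from a stream chunk by chunk and abort once a fixed limit is exceeded.
# Names here are illustrative, not actual Seastar/Scylla APIs.

MAX_REQUEST_SIZE = 16 * 1024 * 1024  # 16MB, like DynamoDB/Alternator

class RequestTooLarge(Exception):
    """Raised when the accumulated body exceeds the limit (maps to HTTP 413)."""

def read_body_with_limit(chunks, limit=MAX_REQUEST_SIZE):
    """Accumulate body chunks, aborting as soon as the limit would be crossed."""
    body = bytearray()
    for chunk in chunks:
        if len(body) + len(chunk) > limit:
            # Close the stream / abort the request here in a real server.
            raise RequestTooLarge(f"request body exceeds {limit} bytes")
        body.extend(chunk)
    return bytes(body)
```

The key property is that the limit lives in the application, not in Seastar: the server just hands over a stream, and each handler decides how much it is willing to read.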
I tend to agree. For historical context on why we had this feature in Seastar: until recently, we didn't have a content stream feature in Scylla's HTTP server, just a content string, so we needed this option to avoid having the server read a huge string. Now that we have the content stream (and use it in Alternator since a recent patch), we indeed don't need this option.

One small benefit of being in the Seastar server that we'll lose if we switch to a Scylla-side use of the content stream: inside the Seastar server, we have advance knowledge of the size of the entire request (when a Content-Length header is present).
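The "advance knowledge" benefit mentioned above amounts to a cheap header check that lets the server reject an oversized request before reading any of its body. A minimal sketch, with a hypothetical helper name (not Seastar code):

```python
# Sketch: when a Content-Length header is present, an oversized request
# can be rejected before any of the body is read. Hypothetical helper,
# not actual Seastar code.

MAX_REQUEST_SIZE = 16 * 1024 * 1024  # 16MB, like DynamoDB/Alternator

def should_reject_early(headers, limit=MAX_REQUEST_SIZE):
    """Return True if Content-Length already proves the request is too big.

    Requests without Content-Length (e.g. chunked encoding) carry no
    up-front size, so they cannot be rejected here and must be checked
    while the body is being streamed.
    """
    value = headers.get("Content-Length")
    if value is None:
        return False  # no advance knowledge; must check during streaming
    return int(value) > limit
```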
Like DynamoDB, Alternator rejects requests larger than some fixed maximum size (16MB). We had a test for this feature - test_too_large_request, but it was too blunt, and missed two issues: Refs #8195 Refs #8196 So this patch adds two better tests that reproduce these two issues: First, test_too_large_request_chunked verifies that an oversized request is detected even if the body is sent with chunked encoding. Second, both tests - test_too_large_request_chunked and test_too_large_request_content_length - verify that the rather limited (and arguably buggy) Python HTTP client is able to read the 413 status code - and doesn't report some generic I/O error. Both tests pass on DynamoDB, but fail on Alternator because of these two open issues. Signed-off-by: Nadav Har'El <nyh@scylladb.com> Message-Id: <20210302154555.1488812-1-nyh@scylladb.com>
Alternator places a limit on the request size, using Seastar's HTTP server feature of `content_length_limit`. What the Seastar code does is just to check the `Content-Length` header... But a long request can be sent without such a header - using chunked encoding, or, if the connection is to be closed after the request, even without any pre-specified length at all. Such requests are currently allowed at any length, which is wrong.

This issue can be reproduced by the test:
`test_manual_requests.py::test_too_large_request_chunked`
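Running `test_too_large_request_chunked` needs a live endpoint, but the chunked framing it relies on - which carries no up-front length the server could check - can be illustrated with a small helper (hypothetical, for illustration only; not code from the test):

```python
def chunk_encode(payload, chunk_size=8192):
    """Encode a payload with HTTP/1.1 chunked transfer encoding.

    Each chunk is prefixed by its size in hex and terminated by CRLF;
    a zero-size chunk ends the body. The total length appears nowhere
    up front, so a server cannot reject the request from the headers
    alone - it only discovers the size while streaming the body.
    """
    out = bytearray()
    for i in range(0, len(payload), chunk_size):
        chunk = payload[i:i + chunk_size]
        out += b"%x\r\n" % len(chunk) + chunk + b"\r\n"
    out += b"0\r\n\r\n"  # terminating zero-length chunk
    return bytes(out)
```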
We could add support in Seastar for this case as well: we now have a `content_stream`, and when the N+1th byte is read from the stream - or perhaps earlier, when the next chunked-encoding chunk is known to go beyond position N - the read from the stream could throw an exception. This exception would cause the request handler to abort what it's doing. If the first Content-Length or chunk already goes beyond N, we don't have to run the handler at all (this is what happens today).

Because the `content_length_limit` is a Seastar feature, we will need to open a corresponding issue for Seastar. We actually have two choices on what to do in Seastar: