Allow channel handlers to respond to Expect: 100-continue requests #1330
Conversation
Adjust the HTTP server response flow control so that channel handlers can send as many 100 Continue responses as they like before the actual response is initiated. A single 100 is common, but applications sometimes send multiple to keep the connection from timing out while a response is being assembled. Without the patch it seems impossible to react properly to Expect: 100-continue.
After the channel sent a 100 head, the next HTTP part must be another head.
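The relaxed precondition can be illustrated with a small, self-contained state machine. This is a sketch with made-up names (`ResponsePart`, `ResponseFlowValidator` are not SwiftNIO types), not the actual implementation: any number of informational heads may precede the final head, after which body and end parts become legal.

```swift
// Sketch (hypothetical types, not SwiftNIO's API) of the response flow
// rule described above: informational 1xx heads may repeat; everything
// else requires the final response head to have been sent first.
enum ResponsePart {
    case head(status: Int)
    case body
    case end
}

struct ResponseFlowValidator {
    private var sentFinalHead = false

    // Returns true if `part` is legal at this point in the response.
    mutating func validate(_ part: ResponsePart) -> Bool {
        switch part {
        case .head(let status):
            if sentFinalHead { return false }           // only one final head
            if !(100...199).contains(status) || status == 101 {
                sentFinalHead = true                    // real response starts
            }
            return true                                 // 100-continue may repeat
        case .body, .end:
            return sentFinalHead                        // only after final head
        }
    }
}
```

Note that 101 (Switching Protocols) is deliberately treated like a final head here, which matches the discussion further down.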
Can one of the admins verify this patch?
BTW: Creating the PR resulted in nine (!) emails sent to me with "Can one of the admins verify this patch?".
Mostly looks good, thanks! Do you mind adding a test for this though?
I don't have the time or the motivation to write a test for such a trivial thing; maybe someone else is willing to go the extra mile (@fabianfett?)
Should be easier to read, and makes more sense too :-)
Fab, that change is a lot nicer. I'll mark this patch as "needs help" so we can find someone to write a test.
Hey, doesn't the same logic apply to other intermediate responses? For example, there is a draft for a
Sure does; the entire 1xx space needs to be covered by this exemption.
Well, yes. I don't care too much, but I'm not sure whether you really want to add drafts to such a library. The draft is probably problematic with clients anyway; I'm sure most HTTP clients are not prepared to receive multiple responses unless they specifically request it via Expect.
We shouldn't add draft status codes, but we should add logic that validates that all 1xx status codes are covered here in a sensible way.
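One way to express that validation, as a sketch with a made-up name (`isNonFinalInformational` is not the actual NIO identifier): derive the exemption from the numeric 1xx range rather than enumerating codes, special-casing only 101.

```swift
// Sketch: classify a status code as a non-final informational response.
// All of 1xx is informational, but 101 (Switching Protocols) ends the
// HTTP/1.1 exchange and must be treated like a final response.
// The function name is made up for illustration.
func isNonFinalInformational(_ statusCode: Int) -> Bool {
    return (100...199).contains(statusCode) && statusCode != 101
}
```

Covering the whole range means draft codes such as 103 are handled automatically, without being listed explicitly.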
I'd be willing to help here, but I'm not sure I fully understand yet. The simplest addition would probably be something like this?
Or would it be cooler to have a properly named computed property on
I think a
But isn't
I am inclined to propose that the
API layers on top may also need to know this; I wouldn't make the logic private.
My gut feeling is that step 1 should be to implement it privately, rather than optimistically making it public because we believe it may be useful. It also solves our naming problem. 😉
How about
Works for me.
It's two files though. So make it internal?
Hey, sorry, it was a bit difficult to find some time to work on this. I added the
However, I also added two tests to verify the behaviour. One of them fails, though, and I don't understand why. Seems like I'm missing something. I'd appreciate it if somebody could take a look :)
Which test is failing, and how?
Sorry, of course. So this test fails:

```swift
func testProcessingResponse() throws {
    let group = MultiThreadedEventLoopGroup(numberOfThreads: 1)
    defer {
        XCTAssertNoThrow(try group.syncShutdownGracefully())
    }

    let expectedHeaders = HTTPHeaders()
    let accumulation = ArrayAccumulationHandler<HTTPClientResponsePart> { parts in
        guard parts.count == 3 else {
            XCTFail("wrong number of parts: \(parts.count)")
            return
        }
        if case .head(let h) = parts[0] {
            XCTAssertEqual(HTTPVersion(major: 1, minor: 1), h.version)
            XCTAssertEqual(HTTPResponseStatus.processing, h.status)
            XCTAssertEqual(expectedHeaders, h.headers)
        } else {
            XCTFail("unexpected type on index 0 \(parts[0])")
        }
        if case .head(let h) = parts[1] {
            XCTAssertEqual(HTTPVersion(major: 1, minor: 1), h.version)
            XCTAssertEqual(HTTPResponseStatus.ok, h.status)
            XCTAssertEqual(expectedHeaders, h.headers)
        } else {
            XCTFail("unexpected type on index 1 \(parts[1])")
        }
        if case .end(let trailers) = parts[2] {
            XCTAssertNil(trailers)
        } else {
            XCTFail("unexpected type on index \(parts.count - 1) \(parts[parts.count - 1])")
        }
    }

    let httpHandler = SimpleHTTPServer(.byteBuffer)
    let serverChannel = try assertNoThrowWithValue(ServerBootstrap(group: group)
        .serverChannelOption(ChannelOptions.socket(SocketOptionLevel(SOL_SOCKET), SO_REUSEADDR), value: 1)
        .childChannelInitializer { channel in
            channel.pipeline.configureHTTPServerPipeline(withPipeliningAssistance: false).flatMap {
                channel.pipeline.addHandler(httpHandler)
            }
        }
        .bind(host: "127.0.0.1", port: 0)
        .wait())
    defer {
        XCTAssertNoThrow(try serverChannel.syncCloseAcceptingAlreadyClosed())
    }

    let clientChannel = try assertNoThrowWithValue(ClientBootstrap(group: group)
        .channelInitializer { channel in
            channel.pipeline.addHTTPClientHandlers().flatMap {
                channel.pipeline.addHandler(accumulation)
            }
        }
        .connect(to: serverChannel.localAddress!)
        .wait())
    defer {
        XCTAssertNoThrow(try clientChannel.syncCloseAcceptingAlreadyClosed())
    }

    let head = HTTPRequestHead(version: HTTPVersion(major: 1, minor: 1), method: .POST, uri: "/long-task")
    clientChannel.write(NIOAny(HTTPClientRequestPart.head(head)), promise: nil)
    try clientChannel.writeAndFlush(NIOAny(HTTPClientRequestPart.end(nil))).wait()

    accumulation.syncWaitForCompletion()
}
```

It relies on the fact that the test server sends two responses, one

```swift
case "/long-task":
    let head = HTTPResponseHead(version: req.version, status: .processing)
    let r = HTTPServerResponsePart.head(head)
    context.writeAndFlush(self.wrapOutboundOut(r), promise: nil)
    context.eventLoop.scheduleTask(in: .milliseconds(100)) {
        let head = HTTPResponseHead(version: req.version, status: .ok)
        let r = HTTPServerResponsePart.head(head)
        context.writeAndFlush(self.wrapOutboundOut(r), promise: nil)
        context.writeAndFlush(self.wrapOutboundOut(.end(nil))).recover { error in
            XCTFail("unexpected error \(error)")
        }.whenComplete { (_: Result<Void, Error>) in
            self.sentEnd = true
            self.maybeClose(context: context)
        }
    }
```

So my assumption above is in the test that
The issue is in HTTPDecoder, specifically here. This code does not tolerate informational responses and consumes the soliciting request. This means the actual response that follows is considered unsolicited. The current check (which slams together 1xx and 204/304) should be split, and the 1xx part should be rewritten in terms of informational responses (as 101 is not one). Then the informational check can be done first, before we pop the request off to validate it. The 204/304 check can still come after that pop.
Thanks Cory, but I don't think I fully understand how this is supposed to work. From your message I inferred something like this:

```swift
if statusCode / 100 == 1 && statusCode != 101 {
    skipBody = true
} else {
    guard let method = self.requestHeads.popFirst()?.method else {
        self.richerError = NIOHTTPDecoderError.unsolicitedResponse
        return .error(HPE_UNKNOWN)
    }
    if method == .HEAD || method == .CONNECT {
        skipBody = true
    } else if statusCode == 204 || statusCode == 304 {
        skipBody = true
    }
}
```

This did not help though...
@t089 What exactly is the problem you're facing with the above code? That looks okay to me.
Ah, sorry, I was not paying enough attention. The test from above was still failing. But I just looked at it again, and actually it was failing with a different error: Expected parts: I guess that makes sense. So I'm happy to report that all tests are green on my machine™. Interestingly the
Shall I open a new PR with the changes?
@t089 awesome! Thank you, yes, a PR would be very welcome.
I lost track, did we actually get this 100-continue support now? :-)
Nope, #1422 is still pending.
Pinged #1422.
Superseded by #1985 |
Changed the HTTP response flow preconditions to allow for 100-continue heads
Motivation:
Channel handlers need to be able to respond to Expect: 100-continue requests.
Easy to reproduce with curl, e.g.:
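The original curl invocation was not preserved in this copy of the conversation; a typical command that makes curl send an Expect: 100-continue request looks like the following (host, port, and path are placeholders for the local test server):

```shell
# curl adds "Expect: 100-continue" automatically for large uploads;
# forcing the header makes the behaviour reproducible with a small body.
# 127.0.0.1:8080 and /long-task are placeholders for the test server.
curl -v -H 'Expect: 100-continue' --data 'hello' http://127.0.0.1:8080/long-task
```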
Without the patch curl still works, but will wait a second or so before sending the request body, until some timeout expires. Other clients may just time out.
BTW: Not sure, but maybe NIOHTTP2 needs the same patch?
Modifications:
This doesn't change any actual code, just the incorrect assumptions in the preconditions.
100-continue heads can be sent multiple times until the actual response head is sent out.
Result:
Channel handlers can write 100-continue heads to the NIOHTTP1 pipeline.