
Allow channel handlers to respond to Expect: 100-continue requests #1330

Closed
wants to merge 3 commits into from

Conversation

@helje5 (Contributor) commented Jan 2, 2020

Changed the HTTP response flow preconditions to allow for 100-continue heads

Motivation:

Channel handlers need to be able to respond to Expect: 100-continue requests.

Easy to reproduce w/ curl, e.g.:

$ curl --verbose  -F a=10 -F b=20 http://localhost:1337/
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 1337 (#0)
> POST / HTTP/1.1
> Host: localhost:1337
> User-Agent: curl/7.54.0
> Accept: */*
> Content-Length: 230
> Expect: 100-continue             // need to be able to respond to this
> Content-Type: multipart/form-data; boundary=------------------------0546700c49d7551e
> 
< HTTP/1.1 100 Continue           // w/ the patch
< HTTP/1.1 200 OK
< Content-Type: text/html
< transfer-encoding: chunked

Without the patch curl still works, but it waits about a second before sending the request body, until an internal timeout fires. Other clients may simply time out.

BTW: Not sure, maybe NIOHTTP2 needs the same patch?
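For context, a handler reacting to this could look roughly like the following. This is only a sketch against the public NIOHTTP1 types; the handler name and its placement in the pipeline are made up for illustration:

```swift
import NIO
import NIOHTTP1

// Hypothetical illustration: a server handler that replies with
// `100 Continue` as soon as it sees a request head carrying
// `Expect: 100-continue`, so the client starts sending the body.
final class ExpectContinueHandler: ChannelInboundHandler {
    typealias InboundIn = HTTPServerRequestPart
    typealias OutboundOut = HTTPServerResponsePart

    func channelRead(context: ChannelHandlerContext, data: NIOAny) {
        if case .head(let head) = self.unwrapInboundIn(data),
           head.headers["expect"].contains(where: { $0.lowercased() == "100-continue" }) {
            // Interim response: tell the client to go ahead with the body.
            let interim = HTTPResponseHead(version: head.version, status: .`continue`)
            context.writeAndFlush(self.wrapOutboundOut(.head(interim)), promise: nil)
        }
        context.fireChannelRead(data)
    }
}
```

Before the patch, writing the real response head after such an interim 100 would trip the precondition in the response flow checks.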

Modifications:

This doesn't change any actual code, just the incorrect assumptions in the preconditions.
100-continue heads can be sent multiple times until the actual response head is sent out.

Result:

Channel handlers can write 100-continue heads to the NIOHTTP1 pipeline.

Adjust HTTP server response flow control so that channel handlers can send as many 100-continues as they like before the actual response is initiated. A single 100 is common, but sometimes applications send multiple to keep the connection from timing out while a response is being assembled.

Without the patch it seems impossible to properly react to Expect: 100-continue: after the channel sent a 100 head, the next HTTP part must be another head.
@swift-server-bot

Can one of the admins verify this patch?

8 similar comments

@helje5 (Contributor, Author) commented Jan 2, 2020

BTW: Creating the PR resulted in 9 (!) emails sent to me with "Can one of the admins verify this patch?".

@Lukasa (Contributor) left a comment

Mostly looks good, thanks! Do you mind adding a test for this though?

Review comment on Sources/NIOHTTP1/HTTPServerProtocolErrorHandler.swift (outdated, resolved)
@helje5 (Contributor, Author) commented Jan 2, 2020

Do you mind adding a test for this though?

I don't have the time nor motivation to write a test for such a trivial thing, maybe someone else is willing to go the extra mile (@fabianfett ?)

Should be easier to read, and makes more sense too :-)
@Lukasa (Contributor) left a comment

Fab, that change is a lot nicer. I’ll mark this patch as needs help so we can find someone to write a test.

@Lukasa added the "help wanted" (Extra attention is needed) label Jan 5, 2020
@t089 (Contributor) commented Feb 5, 2020

Hey, doesn't the same logic apply to other intermediate responses? For example, there is a draft for a 102 Processing status. A server can respond with 102 Processing to indicate that a request is taking longer to process, and it can emit multiple 102 Processing responses until it eventually sends the final 200 OK once processing completes. Not sure if you would be cool with adding this to NIO already...

@Lukasa (Contributor) commented Feb 5, 2020

Sure does; the entire 1xx space needs to be covered by this exemption.

@helje5 (Contributor, Author) commented Feb 5, 2020

Well yes. I don't care too much, but I'm not sure whether you really want to add drafts to such a lib. The draft is probably problematic with clients anyway; I'm sure most HTTP clients are not prepared to receive multiple responses unless they specifically request it via Expect.

@Lukasa (Contributor) commented Feb 5, 2020

We shouldn't add draft status codes, but we should add logic that validates that all 1XX status codes are covered here in a sensible way.

@t089 (Contributor) commented Feb 6, 2020

We shouldn't add draft status codes, but we should add logic that validates that all 1XX status codes are covered here in a sensible way.

I'd be willing to help here, but I'm not sure I fully understand yet. The simplest addition would probably be something like this:

if head.status != .`continue` && head.status != .processing {
  self.nextExpectedOutboundMessage = .bodyOrEnd
}

Or would it be cooler to have a properly named computed property on HTTPResponseStatus, something like isIntermediate? Although that is a bad name I guess, because while 101 Switching Protocols is also an "intermediate response", the next message after it will always be .end and not another response.

@Lukasa (Contributor) commented Feb 6, 2020

I think a .isIntermediate is a good idea (though it should be called .isInformational). It can safely exclude 101, but should include any unknown 1XX codes.

@t089 (Contributor) commented Feb 6, 2020

I think a .isIntermediate is a good idea (though it should be called .isInformational). It can safely exclude 101, but should include any unknown 1XX codes.

But isn't 101 also an "informational" response? I just wonder if this is good naming. If you saw isInformational as a property, wouldn't you expect: "Ah nice, this tells me whether this is a 1xx response"? Except it doesn't if it is 101. We can of course make isInformational fileprivate, so that its meaning is only relevant to this specific use case.

@Lukasa (Contributor) commented Feb 6, 2020

I am inclined to propose that the isInformational var be private to the file, as we only want to ask that question here for now.

@helje5 (Contributor, Author) commented Feb 6, 2020

API layers on top may also need to know this, I wouldn’t make the logic private.

@Lukasa (Contributor) commented Feb 6, 2020

My gut feeling is that step 1 should be to implement it privately, rather than optimistically make it public because we believe it may be useful. It also solves our naming problem. 😉

@t089 (Contributor) commented Feb 6, 2020

How about var isInformational: Bool { (100..<200).contains(self.code) } and, at the call site, if status.isInformational && status != .switchingProtocols? That way the property is generic and does what it says, and the if statement spells out the exception for the upgrade case.

@Lukasa (Contributor) commented Feb 6, 2020

Works for me.

@helje5 (Contributor, Author) commented Feb 6, 2020

I am inclined to propose that the isInformational var be private to the file, as we only want to ask that question here for now.

It's two files though. So make it internal?

@t089 (Contributor) commented Feb 26, 2020

Hey, sorry, it was a bit difficult to find some time to work on this. I added the isInformational property here: t089:feature/100-continue. I can open this as a PR that supersedes this one, if you like?

However, I also added two tests to verify the behaviour. One of them fails, though, and I don't understand why; it seems like I'm missing something. I would appreciate it if somebody could take a look :)

@Lukasa (Contributor) commented Feb 26, 2020

Which test is failing and how?

@t089 (Contributor) commented Feb 26, 2020

Which test is failing and how?

Sorry, of course.

So this test fails

func testProcessingResponse() throws {
    let group = MultiThreadedEventLoopGroup(numberOfThreads: 1)
    defer {
        XCTAssertNoThrow(try group.syncShutdownGracefully())
    }

    let expectedHeaders = HTTPHeaders()

    let accumulation = ArrayAccumulationHandler<HTTPClientResponsePart> { (parts) in
        guard parts.count == 3 else {
            XCTFail("wrong number of parts: \(parts.count)")
            return
        }

        if case .head(let h) = parts[0] {
            XCTAssertEqual(HTTPVersion(major: 1, minor: 1), h.version)
            XCTAssertEqual(HTTPResponseStatus.processing, h.status)
            XCTAssertEqual(expectedHeaders, h.headers)
        } else {
            XCTFail("unexpected type on index 0 \(parts[0])")
        }

        if case .head(let h) = parts[1] {
            XCTAssertEqual(HTTPVersion(major: 1, minor: 1), h.version)
            XCTAssertEqual(HTTPResponseStatus.ok, h.status)
            XCTAssertEqual(expectedHeaders, h.headers)
        } else {
            XCTFail("unexpected type on index 1 \(parts[1])")
        }

        if case .end(let trailers) = parts[2] {
            XCTAssertEqual(nil, trailers)
        } else {
            XCTFail("unexpected type on index \(parts.count - 1) \(parts[parts.count - 1])")
        }
    }

    let httpHandler = SimpleHTTPServer(.byteBuffer)
    let serverChannel = try assertNoThrowWithValue(ServerBootstrap(group: group)
        .serverChannelOption(ChannelOptions.socket(SocketOptionLevel(SOL_SOCKET), SO_REUSEADDR), value: 1)
        .childChannelInitializer { channel in
            channel.pipeline.configureHTTPServerPipeline(withPipeliningAssistance: false).flatMap {
                channel.pipeline.addHandler(httpHandler)
            }
        }.bind(host: "127.0.0.1", port: 0).wait())
    defer {
        XCTAssertNoThrow(try serverChannel.syncCloseAcceptingAlreadyClosed())
    }

    let clientChannel = try assertNoThrowWithValue(ClientBootstrap(group: group)
        .channelInitializer { channel in
            channel.pipeline.addHTTPClientHandlers().flatMap {
                channel.pipeline.addHandler(accumulation)
            }
        }
        .connect(to: serverChannel.localAddress!)
        .wait())
    defer {
        XCTAssertNoThrow(try clientChannel.syncCloseAcceptingAlreadyClosed())
    }

    let head = HTTPRequestHead(version: HTTPVersion(major: 1, minor: 1), method: .POST, uri: "/long-task")
    clientChannel.write(NIOAny(HTTPClientRequestPart.head(head)), promise: nil)
    try clientChannel.writeAndFlush(NIOAny(HTTPClientRequestPart.end(nil))).wait()

    accumulation.syncWaitForCompletion()
}

It relies on the test server sending two responses, a .processing and a subsequent .ok, for the same request to /long-task:

case "/long-task":
    let head = HTTPResponseHead(version: req.version, status: .processing)
    let r = HTTPServerResponsePart.head(head)
    context.writeAndFlush(self.wrapOutboundOut(r), promise: nil)
    context.eventLoop.scheduleTask(in: .milliseconds(100)) {
        let head = HTTPResponseHead(version: req.version, status: .ok)
        let r = HTTPServerResponsePart.head(head)
        context.writeAndFlush(self.wrapOutboundOut(r), promise: nil)
        context.writeAndFlush(self.wrapOutboundOut(.end(nil))).recover { error in
            XCTFail("unexpected error \(error)")
        }.whenComplete { (_: Result<Void, Error>) -> () in
            self.sentEnd = true
            self.maybeClose(context: context)
        }
    }

So my assumption in the test above is that accumulation contains 3 parts: [.head(.processing), .head(.ok), .end]. But it actually contains [.head(.processing), .end] and the ok response is lost.

@Lukasa (Contributor) commented Feb 26, 2020

The issue is in HTTPDecoder: specifically here. This code does not tolerate informational responses, and consumes the soliciting request. This means the actual response that follows is considered unsolicited.

The current check (which lumps together 1XX and 204/304) should be split, and the 1XX part should be rewritten in terms of informational responses (as 101 is not one). Then the informational check can be done first, before we pop the request off to validate it. The 204/304 check can still come after that pop.

@t089 (Contributor) commented Feb 28, 2020

Thanks Cory, but I don't think I fully understand how this is supposed to work. From your message I inferred something like this:

if statusCode / 100 == 1 && statusCode != 101 {
    skipBody = true
} else {
    guard let method = self.requestHeads.popFirst()?.method else {
        self.richerError = NIOHTTPDecoderError.unsolicitedResponse
        return .error(HPE_UNKNOWN)
    }
    
    if method == .HEAD || method == .CONNECT {
        skipBody = true
    } else if statusCode == 204 || statusCode == 304 {
        skipBody = true
    }
}

This did not help though...

@weissi (Member) commented Feb 28, 2020

@t089 What exactly is the problem you're facing with the above code? That looks okay to me.

@t089 (Contributor) commented Feb 28, 2020

@t089 What exactly is the problem you're facing with the above code? That looks okay to me.

Ah, sorry, I was not paying enough attention. The test from above was still failing. But I just looked at it again, and it was actually failing with a different error:

Expected parts: [ .head(.processing), .head(.ok), .end ]
Actual parts: [ .head(.processing), .end, .head(.ok), .end ]

I guess that makes sense. So I'm happy to report that all tests are green on my machine™.

Interestingly the .ok response contains transfer-encoding: chunked.

Shall I open a new PR with the changes?

@weissi (Member) commented Feb 28, 2020

@t089 awesome! Thank you, yes, a PR would be very welcome.

@t089 (Contributor) commented Mar 2, 2020

@t089 awesome! Thank you, yes, a PR would be very welcome.

There you go: #1422

@helje5 (Contributor, Author) commented Jul 31, 2020

I lost track, did we actually get this 100 continue support now? :-)

@Lukasa (Contributor) commented Jul 31, 2020

Nope, #1422 is still pending.

@Lukasa changed the base branch from master to main, September 24, 2020 15:10
@weissi (Member) commented Mar 31, 2021

pinged #1422

@fabianfett (Member) commented

Superseded by #1985

@fabianfett closed this Nov 9, 2021
Labels: help wanted (Extra attention is needed)