Add channel handler for server side pipelining. #62

Merged
merged 2 commits into apple:master from Lukasa:cb-pipelining-handler Mar 13, 2018

Conversation

4 participants
@Lukasa
Contributor

Lukasa commented Mar 2, 2018

Motivation:

HTTP pipelining can be tricky to handle properly on the server side.
In particular, it's very easy to write out of order or inconsistently
mutate state. Users often need help to handle this appropriately.

Modifications:

Added an HTTPServerPipelineHandler that only lets one request through
at a time.

Result:

Better servers that are more able to handle HTTP pipelining
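The behaviour the handler enforces can be sketched independently of NIO as a small gate: inbound requests queue up, and each one is only delivered once the response to the previous one is complete. This is a simplified, hypothetical model — the type and its string "requests" are stand-ins, not NIO API — just to illustrate the buffering idea:

```swift
// Simplified model of one-request-at-a-time delivery. Requests are plain
// strings here; the real handler operates on HTTPServerRequestPart.
final class PipelineGate {
    private var responsePending = false       // a request is in flight
    private var buffer: [String] = []         // requests waiting their turn
    private(set) var delivered: [String] = [] // requests handed to the app

    /// Called for every inbound request.
    func receive(_ request: String) {
        if responsePending {
            buffer.append(request)            // hold it back
        } else {
            responsePending = true
            delivered.append(request)         // let exactly one through
        }
    }

    /// Called when the application finishes writing a response.
    func responseComplete() {
        responsePending = false
        if !buffer.isEmpty {
            receive(buffer.removeFirst())     // unblock the next request
        }
    }
}
```

With this gate, two pipelined requests arriving back to back result in only the first being delivered until its response completes, which is exactly the ordering guarantee the PR description is after.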

@Lukasa Lukasa requested review from normanmaurer and weissi Mar 2, 2018

import NIO

/// A `ChannelHandler` that handles HTTP pipelining by buffering inbound data until a

@normanmaurer

normanmaurer Mar 2, 2018

Member

@Lukasa maybe we want to add a link to https://tools.ietf.org/html/rfc7230#section-6.3.2, just to add a bit more context?

/// mutation of any shared server state is not parallelised, and that responses are always
/// sent for each request in turn. In almost all cases this is the behaviour that a
/// pipeline will want.
public class HTTPServerPipelineHandler: ChannelInboundHandler, ChannelOutboundHandler {

@normanmaurer

normanmaurer Mar 2, 2018

Member

nit: final ?

}

public func write(ctx: ChannelHandlerContext, data: NIOAny, promise: EventLoopPromise<Void>?) {
precondition(self.responsePending, "Unexpected data flow")

@normanmaurer

normanmaurer Mar 2, 2018

Member

@Lukasa should this be an assert?
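For context on the precondition-vs-assert question: in Swift, `precondition` remains active in release builds (`-O`) while `assert` is compiled out, so the choice decides whether a violated invariant traps in production or only during development. A minimal illustration — the helper function name here is hypothetical, not part of the PR:

```swift
// Hypothetical helper illustrating the difference discussed above.
func checkedDeliver(responsePending: Bool) -> String {
    // precondition: evaluated in debug AND release (-O) builds;
    // a violated invariant always traps.
    precondition(responsePending, "Unexpected data flow")

    // assert: evaluated only in debug builds; removed under -O,
    // so release binaries pay no runtime cost for this check.
    assert(responsePending, "Unexpected data flow")

    return "delivered"
}
```

(Under `-Ounchecked` both checks are removed entirely.)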

private var readPending = false

/// The buffered HTTP requests that are not going to be addressed yet.
private var requestBuffer: [NIOAny] = []

@normanmaurer

normanmaurer Mar 2, 2018

Member

I wonder if we want to use a CircularBuffer here? It sounds like a better fit in terms of overhead. WDYT?

let reads = self.requestBuffer
self.requestBuffer = []

reads.forEach { self.channelRead(ctx: ctx, data: $0) }

@normanmaurer

normanmaurer Mar 2, 2018

Member

See above ... If we switch to CircularBuffer we can just empty it and there is nothing left.
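For readers unfamiliar with the suggestion: a circular (ring) buffer gives O(1) append and removeFirst, so the drain loop can simply pop until the buffer is empty instead of swapping the whole array out. A minimal stand-in sketch — NIO's actual CircularBuffer grows dynamically and has a richer API, so this fixed-capacity version is only illustrative:

```swift
// Minimal stand-in for a circular buffer: O(1) append and removeFirst,
// so draining pops elements in order with no temporary array copy.
struct SimpleCircularBuffer<Element> {
    private var storage: [Element?]
    private var head = 0   // index of the oldest element
    private var count = 0

    init(capacity: Int) { storage = Array(repeating: nil, count: capacity) }

    var isEmpty: Bool { return count == 0 }

    mutating func append(_ element: Element) {
        precondition(count < storage.count, "buffer full")
        storage[(head + count) % storage.count] = element
        count += 1
    }

    mutating func removeFirst() -> Element {
        precondition(count > 0, "buffer empty")
        let element = storage[head]!
        storage[head] = nil
        head = (head + 1) % storage.count
        count -= 1
        return element
    }
}

// Drain pattern: nothing left behind once the loop exits.
var buffer = SimpleCircularBuffer<Int>(capacity: 4)
buffer.append(1); buffer.append(2); buffer.append(3)
var drained: [Int] = []
while !buffer.isEmpty { drained.append(buffer.removeFirst()) }
```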

@normanmaurer

Member

normanmaurer commented Mar 2, 2018

@Lukasa also I think we want to use this handler in our HTTP1 example

@Lukasa Lukasa force-pushed the Lukasa:cb-pipelining-handler branch 2 times, most recently from d1b9cbc to 3e9d30f Mar 2, 2018

@Lukasa

Contributor Author

Lukasa commented Mar 2, 2018

Cool, updated. Time to rereview!

@@ -399,6 +399,8 @@ let bootstrap = ServerBootstrap(group: group)
// Set the handlers that are applied to the accepted Channels
.childChannelInitializer { channel in
channel.pipeline.addHTTPServerHandlers().then {
channel.pipeline.add(handler: HTTPServerPipelineHandler())

@weissi

weissi Mar 2, 2018

Contributor

I think addHTTPServerHandlers should have some parameter of whether to insert HTTPServerPipelineHandler automatically

@Lukasa

Lukasa Mar 2, 2018

Author Contributor

Do you mind if we open an issue to track this work and I deal with it in another PR? Doing it well requires a fairly substantial rewrite of that code, and I don't want to make this PR handle two jobs.

@normanmaurer

normanmaurer Mar 2, 2018

Member

@Lukasa works for me

@weissi

weissi Mar 3, 2018

Contributor

Works for me too but maybe we should tag 1.1.0 before this PR so we can sort out possible APIs before we then tag 1.2.0 which will contain this PR? But you two decide, I’ll be offline for most of the day and off next week

}

public func write(ctx: ChannelHandlerContext, data: NIOAny, promise: EventLoopPromise<Void>?) {
assert(self.state != .requestEndPending, "Unexpected data flow")

@weissi

weissi Mar 2, 2018

Contributor

If I see this correctly, this happens when a second response is coming in and we're waiting for the previous request's .end. Could we have a better message here?

@normanmaurer

Member

normanmaurer commented Mar 2, 2018

This will fix #43

///
/// See https://tools.ietf.org/html/rfc7230#section-6.3.2 for more details on safely
/// handling HTTP pipelining.
public final class HTTPServerPipelineHandler: ChannelInboundHandler, ChannelOutboundHandler {

@normanmaurer

normanmaurer Mar 2, 2018

Member

now that we have ChannelDuplexHandler please use it :)

if deliveredRead {
ctx.fireChannelReadComplete()
}
}

@normanmaurer

normanmaurer Mar 2, 2018

Member

@Lukasa do we need to do something "smart" for the buffered writes if close(ctx, mode, promise) is called? Maybe it's ok to just drop these on the floor tho

@Lukasa

Lukasa Mar 2, 2018

Author Contributor

I don't think so. If we got a close there's not really anything smart we can do with them anyway.

@normanmaurer

normanmaurer Mar 2, 2018

Member

sounds good...

@Lukasa Lukasa force-pushed the Lukasa:cb-pipelining-handler branch from 3e9d30f to b341e7f Mar 2, 2018

@normanmaurer

Member

normanmaurer commented Mar 2, 2018

@Lukasa can you rebase ?

@Lukasa Lukasa force-pushed the Lukasa:cb-pipelining-handler branch from b341e7f to 0ef1052 Mar 3, 2018

@weissi

Contributor

weissi commented Mar 3, 2018

@Lukasa please run

git mv IntegrationTests/tests_01_http/SKIP_test_13_http_pipelining.sh  IntegrationTests/tests_01_http/test_13_http_pipelining.sh

to enable the pipelining test

@Lukasa Lukasa force-pushed the Lukasa:cb-pipelining-handler branch 2 times, most recently from 071e885 to 7ce6b9e Mar 3, 2018

@apple apple deleted a comment from jordan2936 Mar 3, 2018

@Lukasa

Contributor Author

Lukasa commented Mar 3, 2018

@swift-nio-bot test this please

@Lukasa Lukasa force-pushed the Lukasa:cb-pipelining-handler branch from 7ce6b9e to 33887ce Mar 3, 2018

@Lukasa Lukasa added this to the 1.2.0 milestone Mar 5, 2018

@Lukasa Lukasa force-pushed the Lukasa:cb-pipelining-handler branch from 33887ce to 927a495 Mar 7, 2018

@Lukasa

Contributor Author

Lukasa commented Mar 7, 2018

@swift-nio-bot test this please

@Lukasa

Contributor Author

Lukasa commented Mar 7, 2018

@tomerd Seems like the builder is keeping hold of state from #109 which is causing this build to fail. Want to dive into that?

@tomerd

Member

tomerd commented Mar 8, 2018

@swift-nio-bot test this please

@tomerd

Member

tomerd commented Mar 8, 2018

@tomerd Seems like the builder is keeping hold of state from #109 which is causing this build to fail. Want to dive into that?

@Lukasa this is fixed now, but this is failing an integration test:

00:03:49.294 Running test 'test_13_http_pipelining.sh'... FAILURE (1)

@Lukasa

Contributor Author

Lukasa commented Mar 8, 2018

@tomerd Yeah, I suspected it would, thanks. 😄

@Lukasa

Contributor Author

Lukasa commented Mar 8, 2018

So the failure here is because netcat half-closes the TCP connection when it gets EOF, which confuses the crap out of our test server. I'm going to work on that, but it'll take a few patches because the test server doesn't currently handle keep-alive properly either.

@Lukasa Lukasa force-pushed the Lukasa:cb-pipelining-handler branch 3 times, most recently from dd78e06 to 1a05d64 Mar 12, 2018

@Lukasa

Contributor Author

Lukasa commented Mar 13, 2018

Thank goodness, this now works. Please re-review.

@weissi
Contributor

weissi left a comment

looks really good. One nit, and @Lukasa did you run perf tests, just to see the impact of this?

}
}

public var reads: [Event] = []

@weissi

weissi Mar 13, 2018

Contributor

should this really be publicly mutable?

@Lukasa

Lukasa Mar 13, 2018

Author Contributor

In the tests I don't think it matters.

Nope, didn't run perf tests, will do shortly.

@weissi

weissi Mar 13, 2018

Contributor

sorry, missed that it was in the tests, please ignore

@Lukasa

Contributor Author

Lukasa commented Mar 13, 2018

Perf results:

Without pipeline handler

wrk -c 1k -d 1m -t 4 --latency http://localhost:8888
Running 1m test @ http://localhost:8888
  4 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    17.19ms    3.15ms  56.53ms   84.12%
    Req/Sec    13.83k   842.71    16.24k    80.12%
  Latency Distribution
     50%   17.69ms
     75%   18.28ms
     90%   19.58ms
     99%   22.45ms
  3302121 requests in 1.00m, 160.61MB read
  Socket errors: connect 0, read 918, write 0, timeout 0
Requests/sec:  55004.64
Transfer/sec:      2.68MB

With the pipelining handler

wrk -c 1k -d 1m -t 4 --latency http://localhost:8888
Running 1m test @ http://localhost:8888
  4 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    16.84ms    4.89ms 168.78ms   86.04%
    Req/Sec    13.57k     1.02k   17.11k    86.04%
  Latency Distribution
     50%   17.90ms
     75%   18.45ms
     90%   19.69ms
     99%   23.24ms
  3240724 requests in 1.00m, 157.62MB read
  Socket errors: connect 0, read 1131, write 0, timeout 0
Requests/sec:  53972.47
Transfer/sec:      2.63MB

With the pipelining handler, and actually pipelining:

wrk -c 1k -d 1m -t 4 --latency -s pipeline.lua http://localhost:8888
Running 1m test @ http://localhost:8888
  4 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   317.96ms  210.25ms   1.99s    62.87%
    Req/Sec    47.34k    14.35k   96.72k    66.08%
  Latency Distribution
     50%  291.35ms
     75%  471.95ms
     90%  626.93ms
     99%  824.32ms
  11300061 requests in 1.00m, 549.61MB read
  Socket errors: connect 0, read 2381, write 70, timeout 467
Requests/sec: 188102.51
Transfer/sec:      9.15MB

The TL;DR here is that the pipelining handler slows the server down in the non-pipelined mode by a worst-case of about 2% (these tests do no meaningful CPU work in the HTTP server itself, so the performance delta here basically is the worst case).
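As a quick sanity check, the "about 2%" worst case falls straight out of the Requests/sec figures quoted above:

```swift
// Throughput drop in non-pipelined mode, from the wrk numbers above.
let withoutHandler = 55004.64   // Requests/sec, no pipeline handler
let withHandler    = 53972.47   // Requests/sec, with the handler
let drop = (withoutHandler - withHandler) / withoutHandler
// drop works out to just under 0.02, i.e. just under 2%
```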

I'd say that's a small price to pay for correctness. ;)

(BTW, the read errors here are probably because this server closes non-keepalive connections, and that the way it does so could easily lead to TCP RST rather than EOF. We should address that in a future patch.)

@weissi

Contributor

weissi commented Mar 13, 2018

@Lukasa the perf results look awesome 👌

@normanmaurer

Member

normanmaurer commented Mar 13, 2018

@Lukasa awesome! I will do a last review pass later today

@normanmaurer

Member

normanmaurer commented Mar 13, 2018

Also the 2% may just be the effect of having another handler in the pipeline in general

/// - returns: An `EventLoopFuture` that will fire when the pipeline is configured.
- public func addHTTPServerHandlers(first: Bool = false) -> EventLoopFuture<Void> {
+ public func addHTTPServerHandlers(first: Bool = false, supportPipelining: Bool = false) -> EventLoopFuture<Void> {

@normanmaurer

normanmaurer Mar 13, 2018

Member

@Lukasa @weissi I guess this change is ok as we have a default param, correct?

@weissi

weissi Mar 13, 2018

Contributor

@normanmaurer yes 👍

@normanmaurer
Member

normanmaurer left a comment

Generally looking awesome to me... Just a few small suggestions. After these are addressed LGTM

}
}

private final class SuppressingReadsHandler: ChannelOutboundHandler {

@normanmaurer

normanmaurer Mar 13, 2018

Member

IMHO the name is confusing as it really does not suppress reads but just counts them and forwards the reads to the next handler. Maybe call it ReadCountHandler?

[.read(HTTPServerRequestPart.head(self.requestHead)),
.read(HTTPServerRequestPart.end(nil)),
.read(HTTPServerRequestPart.head(self.requestHead)),
.read(HTTPServerRequestPart.end(nil)), .halfClose])

@normanmaurer

normanmaurer Mar 13, 2018

Member

remove whitespaces in front of .halfClose

/// generate a response.
private enum BufferedEvent {
/// A channelRead event.
case read(NIOAny)

@normanmaurer

normanmaurer Mar 13, 2018

Member

maybe call it channelRead to ensure this is not confused with the read method in ChannelOutboundHandler. Just a suggestion tho

@normanmaurer

normanmaurer Mar 13, 2018

Member

@Lukasa did you see this and decide that it's ok as it is? That's fine, just wanted to make sure you didn't miss it

@Lukasa

Lukasa Mar 13, 2018

Author Contributor

Oh lol, I only did it for the other enum. Fixed.

/// an enormous amount of data down the `Channel` while a server is processing a
/// slow response.
///
/// See https://tools.ietf.org/html/rfc7230#section-6.3.2 for more details on safely


@Lukasa Lukasa force-pushed the Lukasa:cb-pipelining-handler branch from 1a05d64 to f67633b Mar 13, 2018

@Lukasa

Contributor Author

Lukasa commented Mar 13, 2018

Ok, feedback applied.

Add channel handler for server side pipelining.

@Lukasa Lukasa force-pushed the Lukasa:cb-pipelining-handler branch from f67633b to 8891819 Mar 13, 2018

@normanmaurer
Member

normanmaurer left a comment

Great work!

@weissi

weissi approved these changes Mar 13, 2018

Contributor

weissi left a comment

awesome work, ta

@weissi weissi merged commit 860a7d4 into apple:master Mar 13, 2018

1 check passed

pull request validation Build finished.

@Lukasa Lukasa deleted the Lukasa:cb-pipelining-handler branch Mar 13, 2018
