Ensure HTTPDecoder will not run into re-entrance issues and correctly… #427
Conversation
Force-pushed 930731a to 4855599 (compare)
Sources/NIOHTTP1/HTTPDecoder.swift
Outdated
}

public func handlerRemoved(ctx: ChannelHandlerContext) {
    if let buffer = self.cumulationBuffer?.slice(), buffer.readableBytes > 0, self.parser.upgrade == 1, ctx.channel.isActive {
Why not &&? Don't hugely mind, but usually we use && I think.
Yeah, we prefer && over , except when we need to separate conditional bindings.
Also this is a really complex conditional, can we write a function to give this conditional a name?
public func handlerRemoved(ctx: ChannelHandlerContext) {
    if let buffer = self.cumulationBuffer?.slice(), buffer.readableBytes > 0, self.parser.upgrade == 1, ctx.channel.isActive {
        self.cumulationBuffer = nil
        ctx.fireChannelRead(NIOAny(buffer))
Is this really safe? It really feels like there's a scenario where the next channel handler expects HTTP messages, this gets invoked during a channel close, and then the next handler crashes.
Any better idea? I think we currently don't know better.
Yeah, don't do this if ctx.channel.isActive is false. That way if handlerRemoved is being invoked while the channel is being torn down you won't forward the data on.
I am confused... isn't that exactly what I am doing here, @Lukasa?
I legit didn't even see that. That's part of why I wanted this conditional factored out. ;)
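For illustration, factoring the conditional out as suggested might look something like this. This is only a sketch: the helper name `leftoverBytesToForward` is invented, though the guard it wraps mirrors the condition quoted above.

```swift
// Hypothetical refactor of handlerRemoved; the helper name is invented.
private func leftoverBytesToForward(ctx: ChannelHandlerContext) -> ByteBuffer? {
    // Only forward if the parser saw an upgrade AND the channel is still
    // active: if handlerRemoved runs during teardown, the next handler may
    // no longer expect HTTP data and must not receive it.
    guard self.parser.upgrade == 1 && ctx.channel.isActive else {
        return nil
    }
    guard let buffer = self.cumulationBuffer?.slice(), buffer.readableBytes > 0 else {
        return nil
    }
    return buffer
}

public func handlerRemoved(ctx: ChannelHandlerContext) {
    if let buffer = self.leftoverBytesToForward(ctx: ctx) {
        self.cumulationBuffer = nil
        ctx.fireChannelRead(NIOAny(buffer))
    }
}
```

Splitting the bindings from the boolean checks this way also makes the `ctx.channel.isActive` teardown guard easy to spot at a glance.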
Sources/NIOHTTP1/HTTPDecoder.swift
Outdated
self.cumulationBuffer = nil

if self.cumulationBuffer!.readableBytes == 0 {
    // Its safe to just drop the cumulationBuffer as we not have any extra views into it that are represented as readerIndex / length.
... as we don't have any extra views
weissi left a comment:
this looks really good actually! Any performance difference?
// Ensure we pause the parser after this callback is complete so we can safely callout
// to the pipeline.
c_nio_http_parser_pause(parser, 1)
don't we need similar stuff for on_body or so?
ignore me, we've got that
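The pause-based re-entrancy guard being discussed can be sketched roughly as follows. This is illustrative only: `runParser` and `firePendingEvents` are invented stand-ins for the decoder's internals, not real NIO API.

```swift
// Illustrative sketch of the pause/callout loop; not the actual decoder code.
func feed(ctx: ChannelHandlerContext, buffer: inout ByteBuffer) {
    while buffer.readableBytes > 0 {
        // Run the C parser. Callbacks such as on_headers_complete call
        // c_nio_http_parser_pause(parser, 1) whenever a pipeline callout is
        // needed, which makes http_parser_execute return early.
        let consumed = self.runParser(&buffer)   // hypothetical wrapper around c_nio_http_parser_execute
        buffer.moveReaderIndex(forwardBy: consumed)

        // At this point we are outside every http_parser callback, so firing
        // events into the pipeline cannot re-enter the parser.
        self.firePendingEvents(ctx: ctx)

        // Un-pause and loop to consume whatever bytes remain.
        c_nio_http_parser_pause(&self.parser, 0)
    }
}
```

The key invariant is that pipeline events are only ever fired between calls into the C parser, never from inside its callbacks.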
@weissi will run perf tests after dinner
/// requests that we choose to punt on it entirely and not allow it. As it happens this is mostly fine:
/// the odds of someone needing to upgrade midway through the lifetime of a connection are very low.
public class HTTPServerUpgradeHandler: ChannelInboundHandler {

Why are we adding this blank line?
Will revert, left-over from some debugging code that was there.
if bufferedMessages.count > 0 {
    ctx.fireChannelReadComplete()

return {
There's no need to heap allocate this, closing over the ChannelHandlerContext. Instead, factor this out to a separate function and just use a state variable.
I did it this way as otherwise we will need to store multiple things (like request, response headers, upgrader), and I thought this code just looks nicer + it's not expected to run frequently. WDYT?
We could try our luck with @convention(thin)?
Eh, I guess that's fine? It feels super weird to me, but ok. We can't use @convention(thin) because as @normanmaurer notes this actually closes over a bunch of state.
We could encapsulate this in a struct instead, to avoid the heap allocation.
I am happy with the struct as well, I just thought this one is a bit nicer and it should not matter. That said, I am fine either way, just tell me what to do :)
For now we can allocate the closure, we can always remove it later.
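For illustration, the two options discussed here, the closure versus the struct, might look like this. The types below are invented stand-ins for the state the upgrade path carries, not NIO's real upgrade API.

```swift
// Invented stand-ins for the state the deferred upgrade needs.
struct RequestHead { let uri: String }
protocol Upgrader { func performUpgrade(request: RequestHead) }

// Option 1 (as in this PR): a closure capturing the state. Capturing
// `request` and `upgrader` forces a heap allocation for the closure
// context, but upgrades are rare, so the cost is paid infrequently.
func makeUpgradeAction(request: RequestHead, upgrader: Upgrader) -> () -> Void {
    return { upgrader.performUpgrade(request: request) }
}

// Option 2 (the suggested alternative): a struct holding the same state.
// No closure-context allocation; the trade-off is a bit more boilerplate
// at the call site.
struct PendingUpgrade {
    let request: RequestHead
    let upgrader: Upgrader
    func run() { upgrader.performUpgrade(request: request) }
}
```

This is also why @convention(thin) can't help here: a thin function pointer cannot capture state, and the whole point of the deferred action is that it carries request, headers, and upgrader with it.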
notUpgrading(ctx: ctx, data: data)
return
case .body(_):
    // ignore
Can you mark this with a TODO that indicates that in a future NIO version we want to add an API where we deliver this to the upgrader in some form?
    }
} else {
    // We should only ever see a request header: by the time the body comes in we should
    // be out of the pipeline. Anything else is an error.
This comment needs rewording as it's no longer accurate. Maybe:
We should decide if we're going to upgrade based on the first request header: if we aren't upgrading, by the time the body comes in we should be out of the pipeline. That means that if we don't think we're upgrading, the only thing we should see is a request head. Anything else is an error.
Force-pushed ae83e9c to 3051625 (compare)
Alright, did run our perf-suite... master: With this PR:
So it seems like this definitely harms the perf a little bit, which I think is not really unexpected with the extra branching etc.
@normanmaurer ouch, raw performance, no networking (
@weissi yeah... that said, I am not sure there is much we can do here (except use a pending list again)... I am open to suggestions tho :)
Sources/NIOHTTP1/HTTPDecoder.swift
Outdated
self.state.currentError = HTTPParserError.httpError(fromCHTTPParserErrno: http_errno(rawValue: httpError))!
ctx.fireErrorCaught(self.state.currentError!)
// Also take into account that we may have called c_nio_http_parser_pause(...)
guard httpError != 0 && httpError != HPE_PAUSED.rawValue else {
@weissi I guess it doesn't really matter much for perf, but we may want to consider switching these: most of the time it will be HPE_PAUSED in reality, so the first test is mostly a waste.
reordered this one
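The reordering works because && short-circuits: with the common HPE_PAUSED case tested first, the guard is usually decided after one comparison. A minimal sketch, where the raw errno values are made up for illustration:

```swift
// Illustrative errno values; the real ones come from C's http_errno enum.
let hpeOK: UInt32 = 0
let hpePausedRaw: UInt32 = 31   // hypothetical raw value for HPE_PAUSED

func isRealParserError(_ httpError: UInt32) -> Bool {
    // Most calls see HPE_PAUSED because we paused the parser ourselves, so
    // test it first: `&&` short-circuits and `!= 0` is usually never run.
    return httpError != hpePausedRaw && httpError != hpeOK
}
```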
@normanmaurer is it faster with the callout list?
@weissi I will check in a few, need to fix some other stuff first :(
After 61b3149: So not too bad imho @weissi
@normanmaurer much better, 8%. I'm probably happy enough with this
@weissi you want me to try a list as well still?
@normanmaurer up to you, but I think it could be a follow-up. This fixes a correctness issue, so I think we should get it in ASAP.
guard self.parser.upgrade == 1 && ctx.channel.isActive else {
    return nil
}
if let buffer = self.cumulationBuffer?.slice(), buffer.readableBytes > 0 {
Out of interest, why are we slicing the cumulation buffer here?
@Lukasa I just thought it may be a bit nicer to consume. That said, we don't need to slice it. Happy to remove if you think we should.
Unexplained slicing without a code comment is the kind of thing that we'll be real nervous to change in 6 months time. :D I think either remove it or put a comment explaining why it's there.
added a comment ;)
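One way the requested comment might read in context; the wording here is illustrative, not necessarily what was committed.

```swift
if let buffer = self.cumulationBuffer?.slice(), buffer.readableBytes > 0 {
    // Slice so the next handler receives a buffer whose readerIndex starts
    // right at the leftover bytes, rather than our whole cumulation buffer
    // with its internal indices. ByteBuffer slices share storage
    // (copy-on-write), so this does not copy the bytes.
    self.cumulationBuffer = nil
    ctx.fireChannelRead(NIOAny(buffer))
}
```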
Lukasa left a comment:
WFM.
… forward bytes left after removal.

Motivation:

We need to ensure we correctly guard against re-entrancy for all cases. Also, we did not correctly ensure we forward pending data after removal, which could lead to missing data after upgrades.

Modifications:

- pause the parser when we need to call out to the pipeline, ensuring we never fire events through the pipeline while in callbacks of http_parser
- correctly forward any pending bytes if an upgrade happened when the decoder is removed
- ensure HTTPUpgradeHandler only removes the decoder after the full request is received
- add testcase provided by @weissi

Result:

Correctly handle upgrades and pending data. Fixes apple#422.
Force-pushed 61e6841 to 5522789 (compare)
I squashed, rebased and pushed again.
weissi left a comment:
thanks, looks great!