Fix http2 header parsing #454
Conversation
Looking at this further it seems that …
Should replace fix for yesodweb#454
Fixing …
Section 8.1.2.1 of RFC 7540 says:

> All pseudo-header fields MUST appear in the header block before regular header fields. Any request or response that contains a pseudo-header field that appears in a header block after a regular header field MUST be treated as malformed (Section 8.1.2.6).

So that is a bug in browsers. Which browsers generate such broken headers? We should try to fix them first.
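The ordering rule cited above can be checked mechanically: a header block is well-formed with respect to ordering if and only if no pseudo-header (a name starting with `:`) appears after a regular header. A minimal illustrative sketch of that check, not Warp's actual code:

```haskell
type Header = (String, String)

-- Pseudo-header fields have names that start with ':'.
isPseudo :: Header -> Bool
isPseudo ((':' : _), _) = True
isPseudo _              = False

-- Well-formed iff, after the leading run of pseudo-headers,
-- no pseudo-header remains (RFC 7540 ordering rule).
pseudoOrderOk :: [Header] -> Bool
pseudoOrderOk hs = not (any isPseudo (dropWhile isPseudo hs))

main :: IO ()
main = do
  print (pseudoOrderOk [(":method","GET"), (":path","/"), ("cookie","a=1")])  -- True
  print (pseudoOrderOk [("cookie","a=1"), (":method","GET")])                 -- False
```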
On further investigation, it's actually warp itself generating the headers in the wrong order. See …
Ah. OK. Thank you for letting me know this. I will fix.
@dylex Does the patch above fix all your problems? If so, I will release a new Warp.
The patch I wrote for this PR, 4c941c5, which just added the cookie header at the end rather than the beginning, fixed this problem. If you want to test something else, the problem always happens with any http2 request that carries a cookie. If you're going to do another release, I'd appreciate it if you could apply 165b4aa or something like it as well for #450, so we can stick with more stable and tested code.
Thank you for your patch. The two patches above seem good. However, I was confused about how to merge them. I made the …
@kazu-yamamoto That looks good, thanks. Just one more on top of that: 0011218 should fix the test results. Sorry, I guess it wasn't clear that I updated the patch in the original PR.

@awpr In a pure-http/1.1 server, if the client issues a PRI request, won't the server just respond with a 400 or 405? Isn't that what will happen here (unless the application actually handles this method), i.e., the same as the behavior in warp 3.0 (which is my goal with this)? Or is it indicating somehow earlier that http/2 should be supported and PRI will be accepted?

Sorry both of these patches got lumped together.
The HTTP/2 spec says that the connection prefix is specifically crafted to confuse HTTP/1.* servers and make them close the connection as early as possible, so I would expect it's not a valid request. I guess it's not harmful to pass it to the HTTP/1.1 code and let it fail, but it seems kind of pointless. In the case of ALPN negotiating "h2-*" even when we think it's disabled, arbitrarily bad things could (in theory) happen, since there's not a guaranteed-unparseable connection prefix in that case. The right thing to do might actually be to send a GOAWAY frame, since h2==True means the client is definitely expecting to be served HTTP/2 ("h2" was negotiated or it sent the HTTP/2 connection prefix).
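The "guaranteed-unparseable connection prefix" referred to here is the fixed client connection preface from RFC 7540 §3.5 — the `PRI` method and `HTTP/2.0` version are deliberately invalid HTTP/1.x. A small illustrative sketch of detecting it (not Warp's implementation, which works on raw socket bytes):

```haskell
import Data.List (isPrefixOf)

-- The HTTP/2 client connection preface (RFC 7540 §3.5).
-- "PRI" is not a real HTTP/1.x method, so an HTTP/1.x server
-- cannot mistake this for a valid request.
connectionPreface :: String
connectionPreface = "PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"

-- True if the peer has started speaking HTTP/2 on this connection.
startsWithPreface :: String -> Bool
startsWithPreface s = connectionPreface `isPrefixOf` s

main :: IO ()
main = do
  print (startsWithPreface (connectionPreface ++ "<binary frames>"))   -- True
  print (startsWithPreface "GET / HTTP/1.1\r\nHost: example\r\n\r\n")  -- False
```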
My thought was that disabling http2 should make it behave exactly like warp 3.0 did, like a non-http2-aware server, for backwards compatibility. In that case, if the user manually specified an ALPN response, you'd still potentially get this confusion (just as you would if the user's hook claimed to support some other protocol entirely), but it's clearly the user's fault/intention. On the other hand, if disabling http2 should be more like "we still understand h2 but refuse to speak it", then I suppose GOAWAY makes sense. Either way, as long as browsers cleanly fall back to http1, it's fine with me.
Okay, I'm convinced it's probably fine either way. It's still a little odd to go on serving HTTP/1 if the user forces ALPN to accept HTTP/2, but that's a weird case anyway and clearly user error.
OK. A new version has been released. @dylex Thank you for your contributions!

Thanks! Could you release a new warp-tls too, if you get a chance?

Ah, sorry about that. I have released a new version of warp-tls.
We've been having consistent failures with http2 across multiple browsers and platforms for months (see #450), which turned out primarily to happen when there were cookies in the request. Looking at the `HeaderList` being passed to `validateHeaders`, I see something like:

```haskell
[ ("cookie","session=...")
, (":method","GET")
, (":path","/")
, (":authority","localhost")
, (":scheme","https")
, ("user-agent","Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Firefox/38.0")
, ("accept","text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8")
, ("accept-language","en-US,en;q=0.5")
, ("accept-encoding","gzip, deflate")
]
```
But `validateHeaders` doesn't allow pseudo-headers to be parsed after normal headers. I'm not quite sure why header parsing was split up this way, nor have I looked into the http2 spec or where the `HeaderList` comes from, but this is clearly happening in practice. Allowing `normal` to chain back into `pseudo` fixes this, at least for one browser so far. (The two functions could obviously be merged and cleaned up, but this was the simplest fix.)