x/net/http2: make Transport return nicer error when Amazon ALB hangs up mid-response? #18639
Comments
To add some color on this, what I'm wondering is whether Go HTTP clients are expected to handle the GOAWAY frames potentially returned from HTTP/2 servers, or whether the transport should somehow manage this case, similar to a remotely closed connection. Ideally our program would just retry the request for a NO_ERROR+GOAWAY scenario. I've asked a related question on SO and golang-nuts. |
You don't provide enough information in this bug report. Are you importing golang.org/x/net/http2 at all, or only using Go's net/http package? This was fixed in December via golang/net@8dab929 Are you perhaps using an old version of golang.org/x/net/http2 directly? What HTTP method are these? If they're not idempotent requests, how are you creating the requests? /cc @tombergan |
@bradfitz We're using the standard net/http package. I miscommunicated the version information. We originally observed this using go 1.7 but since then I was able to reproduce it using go1.8 beta1. Yesterday I upgraded to go 1.8rc1 but I'm not certain if I rebuilt the test case. I can retry the test case with 1.8rc1 if that includes golang/net@8dab929. The call stack is roughly:
|
I was just able to observe the same error using a test binary built with go1.8 rc1. |
Can you capture the stderr output with GODEBUG=http2debug=2 set in your environment when it occurs? How "rough" is that "roughly" call stack? Is it always a GET with no body? |
@bradfitz the NewRequest() line is copied verbatim; the body is always nil. I can collect the stderr output later today or tomorrow. |
And where do you see that error? |
@bradfitz that is the error returned (and logged by us) after a specific call to http.Client.Do(). |
@bradfitz below is stderr with GODEBUG=http2debug=2. I removed some sensitive data, replaced with 'XXX'. Hopefully those changes won't interfere.
|
@tombergan, I have a theory. Can you sanity check me? Note that the error he's getting from RoundTrip is:
That comes only from [...]. But [...]. My theory is that when we get the initial GOAWAY and decide to retry the request on a new conn, we're forgetting to remove those streams from the originally chosen conn. Plausible? |
You may have described a bug, but I don't think that's happening in this scenario. I see only two requests in the connection: stream=1 and stream=3. This is the second request:
Note that the GOAWAY frame comes just after sending request HEADERS on stream=3. Note also that the GOAWAY frame has LastStreamID=3. We do not close stream=3 because the server claims it might still process it. We optimistically assume that we will get a response. We receive response HEADERS on stream=3 after the GOAWAY (6 seconds after the GOAWAY, in fact). We then receive a sequence of DATA frames. However, we never get END_STREAM. Instead, the server closes the connection. Their server is behaving legally ... I just double-checked the spec, and it's quite clear that the server is not required to fully process streams with id <= GOAWAY.LastStreamID. The server MAY process those streams only partially. There's not much we can do. We cannot retry the request after the connection closes because we've already received response headers and returned from RoundTrip. I have only two ideas, neither perfect:
|
He said that RoundTrip returned an error. If this trace is really about streamid 3 yet we saw this:
... then RoundTrip should've returned something (END_HEADERS is set), and it's the Body.Read that would've returned the GOAWAY error. So I still don't see a clear picture of what happened. |
I am stumped. I looked for suspicious uses of [...]. @bfallik, can you triple-check whether the error is coming from http.Client.Do() or from Response.Body.Read()? It would help to add a stack dump just before the GoAwayError is constructed:

```diff
+buf := make([]byte, 16<<10)
+buf = buf[:runtime.Stack(buf, true)]
+cc.vlogf("stack after run()\n:%s", string(buf))
 err = GoAwayError{
```

It would also help to add vlogf calls in processHeaders, after this:

```diff
 // We'd get here if we canceled a request while the
 // server had its response still in flight. So if this
 // was just something we canceled, ignore it.
+cc.vlogf("processHeaders could not find stream with id %d", f.StreamID)
 return nil
```

and this:

```diff
 return nil // (nil, nil) special case. See handleResponse docs.
+cc.vlogf("processHeaders got (nil, nil) from handleResponse")
 return nil
```

After making the above changes, could you run the program until it fails with that same "server sent goaway" error, then copy the stack dump here? Thanks! It is fine to remove the call frames running your code if you need to keep those private. |
@tombergan Hi. I don't have the rest of the logging you requested but I do believe that the error is propagated from Response.Body.Read() and not http.Client.Do(). I slightly modified my test harness to split apart those operations and the stack trace clearly originates from:
Do you still need that other debugging information or is there different info I can provide? Also, just to be clear, this is the code I used to reproduce the error:
I think this code is missing a defer resp.Body.Close(). |
Well, that changes everything :) We're back to my earlier comment.
No, I don't think that's related. I think the sequence of events is described by my linked comment above. |
So there's no bug here, then. Except maybe in the http2 server. Which server is this? The linked SO post suggests it's AWS? Is that a new implementation? Do we have any contacts there? |
@jeffbarr, it seems the AWS ALB's new HTTP/2 support is behaving oddly, not finishing streams after sending a GOAWAY. Who's the right person on the AWS side to look into this? Thanks! |
@bradfitz yes, the server is AWS Application Load Balancer. |
@bradfitz assuming there's no bug client-side, do you recommend we explicitly catch the GOAWAY+NO_ERROR error and retry the request within our application logic? I'm still learning about HTTP/2 and so wasn't sure of the expected behavior, but now that seems like our only/best workaround until the server can be fixed. |
@bfallik, you could I suppose. I wouldn't make it specific to that error, though. Even though ALB shouldn't do that, you could just treat it as any other type of network error or interrupted request and retry N times, especially if it's just a GET request. In this case it's too late for the Go http client to retry since the headers have already been returned. If we did retry, it's possible the headers would be different, so we couldn't stitch the second response's body together with the first response's headers. Ideally ALB would be fixed, though. |
@bradfitz OK, thanks. |
Does this still reproduce the issue if you remember to close the response body? |
@jbardin yes, sadly. I was hoping your suggestion would expose the bug. |
I updated the program to include the defer. |
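For reference, here is a minimal sketch of the kind of harness being discussed: a plain GET with the body close deferred, and the Do call separated from the body read so it is clear which one returns the GOAWAY error. The URL and timeout are placeholders, not the reporter's actual program.

```go
package main

import (
	"io"
	"io/ioutil"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 30 * time.Second}

	// Placeholder URL standing in for the ALB-fronted endpoint.
	req, err := http.NewRequest("GET", "https://alb.example.com/endpoint", nil) // nil body, as in the report
	if err != nil {
		log.Fatalf("NewRequest: %v", err)
	}

	// Do returns as soon as the response headers arrive; the GOAWAY error in
	// this issue shows up later, while reading the body.
	resp, err := client.Do(req)
	if err != nil {
		log.Fatalf("Do: %v", err)
	}
	defer resp.Body.Close()

	if _, err := io.Copy(ioutil.Discard, resp.Body); err != nil {
		// e.g. "http2: server sent GOAWAY and closed the connection; ..."
		log.Fatalf("Body read: %v", err)
	}
	log.Printf("response fully read: %s", resp.Status)
}
```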
@bfallik, can you email me privately with a URL & token I can use so I don't need to spend the time & learn how to set up ALB? |
@bradfitz sure |
Same thing: AWS replies with GOAWAY LastStreamID=13 and then sends the HEADERS and part of the DATA for Stream 13, but then closes the TCP connection before sending any END_STREAM bit on a StreamID 13 frame:
You had mentioned in email that this happens predictably when you register a new instance in ALB. If they're killing your instance when you do so, they're only giving your old instance 2 seconds to shut down gracefully. Or maybe that's a configuration knob? |
Checking back in after a long absence but AWS finally responded to the support ticket I raised and the public message board post: https://forums.aws.amazon.com/thread.jspa?messageID=771883#771883. They still contend that their implementation is spec-compliant and, by implication, Go's is not, though they haven't provided any details or answers to the questions raised in this ticket. |
I believe the Go and AWS implementations are both spec compliant. From my earlier comment:
To quote the spec more explicitly:
I don't know why the server waits 6 seconds to send HEADERS after sending the GOAWAY. There may be something suboptimal happening in AWS; I don't have enough information to say. Popping up a level, it's always possible for requests to fail. It's just a fact of life. This is a GET request, so the request is idempotent. Can't you solve this problem by retrying the request at a higher level (hopefully with a retry limit and/or exponential backoff)? If that doesn't work because the request fails deterministically, then it sounds like there's either a bug in AWS or a bug in your usage of AWS, but in either case, it's not a Go bug. |
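A minimal sketch of that higher-level retry, assuming an idempotent GET, a small attempt limit, and exponential backoff; getWithRetry is a made-up helper name and the URL is a placeholder:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
	"time"
)

// getWithRetry retries an idempotent GET a few times with exponential
// backoff when either the request or the body read fails (which is where
// the mid-body GOAWAY/connection-close error in this issue surfaces).
func getWithRetry(client *http.Client, url string, maxAttempts int) ([]byte, error) {
	var lastErr error
	backoff := 100 * time.Millisecond
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if attempt > 0 {
			time.Sleep(backoff)
			backoff *= 2 // exponential backoff between attempts
		}
		resp, err := client.Get(url)
		if err != nil {
			lastErr = err
			continue
		}
		body, err := ioutil.ReadAll(resp.Body)
		resp.Body.Close()
		if err != nil {
			lastErr = err // body read failed mid-stream; safe to retry a GET
			continue
		}
		return body, nil
	}
	return nil, fmt.Errorf("GET %s failed after %d attempts: last error: %v", url, maxAttempts, lastErr)
}

func main() {
	client := &http.Client{Timeout: 30 * time.Second}
	body, err := getWithRetry(client, "https://alb.example.com/endpoint", 3) // placeholder URL
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("got %d bytes\n", len(body))
}
```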
Based on JonZ's post on the Amazon forum thread, it sounds like your backend server is taking too long to generate the response body, which causes the AWS load balancer to consider the connection idle and then shut it down with a GOAWAY. This sounds like an intentional policy decision of the ALB. So, I think the bug is that your backend server takes too long to generate the response body. Leaving this open in case @bradfitz wants to repurpose this bug to improve the error message.
|
@tombergan thanks for clarifying. I misunderstood comments #18639 (comment) and #18639 (comment), which I thought implied that the ALB was not behaving to spec. Yes, we can retry the request within our app. One question I had was whether this retry should really occur within the http package, my reasoning being that http2 behaves differently from http/1.x and my expectation was that it would be a drop-in replacement. But it seems, based on your and @bradfitz's feedback, that this isn't the right solution either (see #18639 (comment)). We'll consider the issue from our side now, either by trying to speed up the HTTP responses or by implementing retries. Happy to leave this issue open if the error wording can be improved. |
FWIW, we see the same behavior on iOS with ALB. We linked to this issue in our support ticket and got the same response. |
We are seeing this error from some versions of kubectl (the Kubernetes CLI client, written in Go, of course) since moving our API servers behind an ALB, except that we are getting ENHANCE_YOUR_CALM.
|
@SleepyBrett, tune your ALB I guess? Or do you want a different error message on Go's side? |
Tune it how? This thread seems to come down to a bunch of finger pointing and no movement: AWS claims you are broken, you claim they are broken, and I don't see a single comment that describes some magical tuning method for the ALB. The fact that it works for some of our users and not others (consistently per client; I'm gathering information to see if I can correlate it to kubectl versions and/or the version of Go used to compile it) seems to point to some kind of movement inside this library, or potentially in how kubectl uses this library. |
@SleepyBrett, I figured ALB would have some parameters for you to tell it to wait longer before closing client connections. Looking at the traces above, Amazon is closing its TCP connections before finishing writing the responses. If you want to send a trace, we can see if it's the same story. |
Hey, so we might be onto a new theory that could explain some of the conflicting results I'm seeing from my user survey. A few users went and sent me their
We've seen this "goaway" error quite commonly from our Go HTTP/2 clients against our ALB backends and paid it no mind, thinking it was just a "do a retry" error. But we noticed more recently that it seems like a deeper problem, especially now that Go's http client handles the retry cases. It seems to be as @bradfitz suggests: the ALB is correctly sending a GOAWAY, and the Go http client is noticing, but then the ALB closes the connection before closing the in-flight HTTP/2 stream (it omits the END_STREAM flag on the final frame), producing an "unexpected connection close" error. It also looks like the stream did complete upstream: the backend has fully processed the request and response; the ALB just omits the flag in the HTTP/2 stream. This breaks the expected contract of a load balancer, imho. I've chimed in on the AWS forums thread already linked. @bradfitz, perhaps in this case the Go http client needs to return an "unexpected connection close before stream end" error or something, instead of blaming the GOAWAY? |
We are experiencing the same issue with AWS ELB --> nginx --> Go service under Kubernetes. |
http2 is allowed to tell us to go away, and for watch it is safe to exit and restart in almost all cases where a connection is forcibly closed by the upstream. This error message happens a lot behind ELB and other http2 aware proxies. Treat the error as "basically done" as suggested by golang/go#18639 (comment)
Is there anything at all that can be done here? Amazon ALB isn't going away, and if it is technically spec-conforming (even if we view its behavior as sub-optimal), it would be nice for golang not to throw errors. |
@froodian, what do you want me to do? |
Is it safe to simply retry the request? Or no because it may have gone through and may not be idempotent? I suspect that the way I'll silence this at my layer is to retry any error where the message contains both "server sent GOAWAY and closed the connection" and "ErrCode=NO_ERROR" |
@froodian, nope, what's happening is:
If ALB hadn't sent a response header then we could retry the request, but it's pretty weird for us to retry the request when we've already given the response headers to the user code. The only safe thing to do is retry the request and hope for exactly the "same" response headers, and only if they "match", then continue acting like the original res.Body (which the Go user code is already reading from) is part of the second retried request. But things like the server's Date header probably changed, so that at least needs to be ignored. What else? What if ALB had already returned some of the response body bytes, but not all? Do we need to keep a checksum of bytes read and stitch together the two bodies if the body is the same length and the second response's prefix bytes have the same checksum? That would all be super sketchy. It's better to just return an error, which we do. If the caller wants to retry, they can retry. Do you just want a better error message? What text would sound good to you? |
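For callers who do decide to retry at their own layer, here is a sketch of how this particular error might be recognized, along the lines of the substring check @froodian describes above. The helper name is made up; string matching is used because the error type produced by the http2 code bundled into net/http is unexported (if you import golang.org/x/net/http2 and configure the transport yourself, checking for an http2.GoAwayError with ErrCode NO_ERROR is an alternative).

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// isRetriableGoAway is a hypothetical helper that recognizes the GOAWAY
// error from this issue by its message. Callers would retry the whole
// request (ideally only for idempotent methods) when it returns true.
func isRetriableGoAway(err error) bool {
	if err == nil {
		return false
	}
	msg := err.Error()
	return strings.Contains(msg, "server sent GOAWAY and closed the connection") &&
		strings.Contains(msg, "ErrCode=NO_ERROR")
}

func main() {
	// Stand-in error value; in real code err would come from resp.Body.Read
	// or io.Copy, as in the traces above.
	err := errors.New(`http2: server sent GOAWAY and closed the connection; LastStreamID=1, ErrCode=NO_ERROR, debug=""`)
	fmt.Println(isRetriableGoAway(err)) // true
}
```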
I see, yeah, thank you for that condensed write-up @bradfitz... that makes sense, and I agree it's correct from a client perspective not to retry when we've already been given response headers... I agree with @sj26 that a message like "unexpected connection close before stream end" or something along those lines might help indicate the problem a little more clearly. But at a more root level, I also wonder if we could lean more heavily on AWS as a group to change their behavior - I agree it really seems like they should let that last response write out its whole body to EOF before they close the TCP connection... but I guess they want to have timeouts on their end too, so that if the server's app code never EOFs the response body for some reason, they still clean up the TCP connection at some point, hence their current behavior...? I guess https://forums.aws.amazon.com/thread.jspa?messageID=771883#771883 appears to be the most recent public thread with AWS about this? But I also wonder if other conversations have gone on behind the scenes. |
I searched our logs for GOAWAY and found a couple of thousand hits with the following message: [...] All hits have ErrCode=NO_ERROR afaict. This seems harmless; can this be an info message instead of an error? |
Please answer these questions before submitting your issue. Thanks!
What version of Go are you using (go version)?

$ go version
go version go1.8rc1 darwin/amd64

What operating system and processor architecture are you using (go env)?

Linux AMD64
What did you do?
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
We have http client code that has started to return errors when the corresponding server uses HTTP2 instead of HTTP.
What did you expect to see?
Identical behavior.
What did you see instead?
http2: server sent GOAWAY and closed the connection; LastStreamID=1, ErrCode=NO_ERROR, debug=""