net/http: support concurrent Request.Body reads & ResponseWriter.Write calls in HTTP/1.x server #15527
I've sent https://golang.org/cl/23011 to document the status quo. We can keep this bug open to track making the HTTP/1.x server support concurrent reads & writes like HTTP/2.
CL https://golang.org/cl/23011 mentions this issue.
Summary: Go's HTTP/1.x server closes the request body once writes are flushed. Go's HTTP/2 server supports concurrent read & write. Added a TODO to make the HTTP/1.x server also support concurrent read+write. But for now, document it.

Updates #15527

Change-Id: I81f7354923d37bfc1632629679c75c06a62bb584
Reviewed-on: https://go-review.googlesource.com/23011
Reviewed-by: Andrew Gerrand <adg@golang.org>
@dpiddy points out the closest thing in RFC 2616 about this topic:
I believe this is a critical issue affecting many users out there and should be addressed as quickly as possible. I was able to reproduce this issue using Transfer-Encoding: chunked, as I explained in this related issue:

This sample server-client pair reproduces this issue:

Please let me know if any assistance is needed to fix this issue for the next release. Thanks!
Relevant HTTP WG email thread: https://lists.w3.org/Archives/Public/ietf-http-wg/2004JanMar/0041.html TL;DR: Yes, you can respond before reading the entire request.
This section should be interpreted exactly as written. The server can't close the connection before reading the entire body of the request for the reason stated in that section. That doesn't mean the server can't respond concurrently.
@Stebalien, thanks for finding that. FWIW, Go's HTTP/1.x server originally permitted this but we encountered enough confused implementations in the wild that would end up deadlocking if they got a response before they were expecting it. (e.g. somebody sent us a POST + body but the Go server sent an Unauthorized response before reading the body). The peer would then deadlock, never reading the response because it wasn't finished writing its body, and we weren't reading their body because our Handler was done running. We've changed behavior a few times over the years (read to EOF, read some, close immediately) and I can't even remember what we do now. Still worth revisiting, but I suspect we might need to make this behavior opt-in for HTTP/1.x on a per-Handler basis somehow.
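For concreteness, here is a hypothetical sketch (not code from this thread; the host, path, and body size are made up) of the kind of confused client described above: one that insists on writing its entire request body before reading any of the response. If the server answers early (e.g. with 401 Unauthorized) and stops reading the body, the TCP buffers on both sides eventually fill and the client blocks on its write, never reaching the point where it would read the response.

package main

import (
    "bufio"
    "bytes"
    "fmt"
    "net"
    "net/http"
)

func main() {
    body := bytes.Repeat([]byte("x"), 1<<26) // a large upload

    // Hypothetical server; any HTTP/1.x server that responds before
    // reading the whole body will do.
    conn, err := net.Dial("tcp", "example.com:80")
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    fmt.Fprintf(conn, "POST /upload HTTP/1.1\r\nHost: example.com\r\nContent-Length: %d\r\n\r\n", len(body))

    // If the peer has already sent its response and stopped reading,
    // this write eventually blocks once the kernel buffers fill up.
    conn.Write(body)

    // Never reached in the deadlock scenario: this client only reads the
    // response after it has finished writing the body.
    resp, err := http.ReadResponse(bufio.NewReader(conn), nil)
    if err != nil {
        panic(err)
    }
    fmt.Println(resp.Status)
}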
@Stebalien @bradfitz I understand that permitting this can cause issues like the ones you mentioned, but it's already there for HTTP/2, so IMHO it should be added to HTTP/1.x, at least as an opt-in feature.
I understand this is tricky. However, looking through the code and some of the related conversations, it's clear that nobody has all the context (not unusual for complex code like this). Incorrect assumptions about both the current implementation and the spec seem to be quite prevalent so I'm trying to correct those assumptions. While I'm not asking for an immediate change, simply letting this bug age isn't going to get us anywhere either.
@Stebalien @bradfitz Got here via caddyserver/caddy#3557. I believe this bug is related to the one I linked to. I can't help with the bug itself - I could never bring myself to invest enough time to learn Go properly - but I'll try to add some perspective on it.

We run several distinct proxies/embedded web servers in our system - envoy, traefik, tomcat (different versions), netty, node's built-in server, etc. - in addition to caddy. Caddy is the only Go-based one, and the only one exhibiting the problem of prematurely closing connections to clients when the full response is sent out before the full request is received. We also run various clients, and none gets confused if it receives the full response before it finishes sending the request.

We provide some APIs, not all of which we control. Some occasionally send huge requests and responses. Having to fully buffer requests and responses all the time adds significant latency when those are large.

The clearest case where you absolutely want concurrent request and response handling, in a proxy or web server, is 503. If you limit the number of simultaneous in-flight requests to keep a server from crashing due to memory exhaustion, and respond 503 whenever the server reaches the limit of requests it is allowed to process concurrently, forcing it to still receive the full request before responding defeats the purpose of 503: you keep the server busy when all it wants to do is say that it's already busy enough and won't process your request. The added delay also prevents load balancers from retrying early. This makes the workarounds damaging in themselves - if a Go-based proxy sits in between, there's no way around fully buffering the request and response before sending them through it.
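To illustrate the 503 load-shedding scenario described above, here is a minimal sketch of my own (not code from this thread; the limit of 100, the port, and the wrapped file server are arbitrary) of a concurrency-limiting handler wrapper. The point made in the comment above is that, with the default HTTP/1.x behavior, the early 503 does not spare the server or the client from dealing with the rest of the request.

package main

import "net/http"

// inflight caps the number of requests processed concurrently (hypothetical limit).
var inflight = make(chan struct{}, 100)

// limit sheds load by answering 503 without ever touching the request body.
// Whether the early response actually helps is what this issue is about:
// with the default HTTP/1.x behavior, the unread body is still consumed
// (or the connection torn down) rather than the response simply racing ahead.
func limit(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        select {
        case inflight <- struct{}{}:
            defer func() { <-inflight }()
            next.ServeHTTP(w, r)
        default:
            http.Error(w, "server busy", http.StatusServiceUnavailable)
        }
    })
}

func main() {
    http.ListenAndServe(":8080", limit(http.FileServer(http.Dir("."))))
}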
I also encountered this problem:

package main

import (
    "fmt"
    "io"
    "net/http"
)

// myecho copies the request body back into the response; once the server
// starts writing (flushing) the response, further reads from r.Body can fail.
func myecho(w http.ResponseWriter, r *http.Request) {
    n, err := io.Copy(w, r.Body)
    if err != nil {
        fmt.Printf("err = %v\n", err)
    }
    fmt.Printf("n = %d\n", n)
}

func netHTTPBug() {
    http.HandleFunc("/post", myecho)
    http.ListenAndServe(":8080", nil)
}

func main() { netHTTPBug() }
seq 600 &>need.data
wc -c need.data
2292 need.data
curl -X POST -d @./need.data 127.0.0.1:8080/post &>got.data
wc -c got.data
829 got.data

I looked at the net/http code yesterday, and it seemed that changing a single bool variable would be enough. Is there some complication that prevents changing it?
@guonaihong Which bool variable are you referring to? I'm running into this issue as well and looking for a way to resolve it.
You can see the code in my screenshot; I remember setting discard to true to get my demo above working.
This bug has been open for over six years and is causing a great deal of pain for our customers. Are there any Go maintainers we can work with to finally get this problem solved? Any chance you might be the right person for this, @neild, or at least know someone who can help?
Change https://go.dev/cl/472636 mentions this issue: |
Add support for concurrently reading from an HTTP/1 request body while writing the response.

Normally, the HTTP/1 server automatically consumes any remaining request body before starting to write a response, to avoid deadlocking clients which attempt to write a complete request before reading the response. Add a ResponseController.EnableFullDuplex method which disables this behavior.

For #15527
For #57786

Change-Id: Ie7ee8267d8333e9b32b82b9b84d4ad28ab8edf01
Reviewed-on: https://go-review.googlesource.com/c/go/+/472636
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Damien Neil <dneil@google.com>
Reviewed-by: Roland Shoemaker <roland@golang.org>
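For anyone landing here later, a minimal sketch of how a handler might opt in, assuming Go 1.21+ (where the ResponseController.EnableFullDuplex method added by the CL above is available); the /echo route and port are arbitrary:

package main

import (
    "io"
    "net/http"
)

// echo opts the HTTP/1.x connection into full-duplex mode so it can stream
// the request body back while writing the response, instead of the default
// behavior of consuming the remaining body before the response is written.
func echo(w http.ResponseWriter, r *http.Request) {
    rc := http.NewResponseController(w)
    if err := rc.EnableFullDuplex(); err != nil {
        // The underlying ResponseWriter doesn't support full-duplex; bail out.
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    io.Copy(w, r.Body)
}

func main() {
    http.HandleFunc("/echo", echo)
    http.ListenAndServe(":8080", nil)
}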
Change https://go.dev/cl/501300 mentions this issue: |
For #15527
For #57786

Change-Id: I75ed0b4bac8e31fac2afef17dad708dc9a3d74e1
Reviewed-on: https://go-review.googlesource.com/c/go/+/501300
Run-TryBot: Damien Neil <dneil@google.com>
Auto-Submit: Damien Neil <dneil@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@google.com>
Please answer these questions before submitting your issue. Thanks!
What version of Go are you using (go version)? Go 1.5.2
What operating system and processor architecture are you using (go env)? windows_amd64
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
https://play.golang.org/p/DaWZXCQNfV
This example ends up with the error "http: invalid Read on closed Body".
https://play.golang.org/p/WYCsIQzx_F
But changing 100 to 10 in strings.Repeat runs without any errors.
Also, this example https://play.golang.org/p/YxjKnmgfGP
$ curl -d 'qweqweq weq weqwe qwe qew qwe' http://localhost:6060/
<qweqweq weq weq><we qwe qew qwe >
shows that reading and writing go on simultaneously and without errors.
I expect to see the same error in both cases.
Instead, I see different behaviour.
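The playground snippets themselves aren't included here; the following is an illustrative sketch of the behavior described, not the original code. The repeated string and the 100-vs-10 difference are assumptions about where the server's output buffer overflows and forces a flush, which can vary across Go versions.

package main

import (
    "io"
    "log"
    "net/http"
    "strings"
)

// chunk is long enough that 100 repetitions (~6 KB) overflow the server's
// response buffer and force a flush, while 10 repetitions (~0.6 KB) stay
// buffered until the handler returns.
const chunk = "some response data, padded out so repetition adds up quickly\n"

func handler(w http.ResponseWriter, r *http.Request) {
    // Once the response is flushed, the HTTP/1.x server has closed the
    // request body, so the read below fails with
    // "http: invalid Read on closed Body". With strings.Repeat(chunk, 10)
    // nothing is flushed yet and the read succeeds.
    w.Write([]byte(strings.Repeat(chunk, 100)))
    if _, err := io.ReadAll(r.Body); err != nil {
        log.Println(err)
    }
}

func main() {
    http.HandleFunc("/", handler)
    log.Fatal(http.ListenAndServe(":6060", nil))
}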