net/http: no way of manipulating timeouts in Handler #16100
Given that we store the …
I think this can be done by providing a custom …, that is, embedding a … The overwritten … Since an HTTP/2 connection can do multiplexing, it would be helpful to have a set of timeouts for individual streams. When a stream hangs on the client, we could use those timeouts to release resources associated with that stream; this is not possible by setting deadlines on the lower-level underlying connection.
@FiloSottile, we won't be exposing net.Conn to handlers, or let users explicitly set deadlines on conns. All public APIs need to consider both HTTP/1 and HTTP/2. You propose many solutions, but I'd like to get a clear statement of the problem first. Even the title of this bug reads like a description of a missing solution rather than of the problem itself. I agree that the WriteTimeout is ill-specified. See also my comment about ReadTimeout here: #16958 (comment) You allude to your original problem here:
So you have an infinite stream, and you want to forcibly abort that stream if the user isn't reading fast enough? In HTTP/1, that means closing the TCP connection. In HTTP/2, that means sending a RST_STREAM. I guess we need to define WriteTimeout before we make progress on this bug. What do you think WriteTimeout should mean?
Why not provide a timeout version of Read/Write? Is there any reason?
My concrete case is: Case 1, about read: the client sends data via a POST API provided by the server, but the network disconnects while the server is reading the request body. Case 2, about write: similar, except the server is writing data back to the client. So I need a timeout mechanism so that the server can be unblocked from these Reads/Writes. Thanks.
What if the mechanism to abort blocked read or write I/O is to just exit from the ServeHTTP function? Then handlers can run their own timers before reads/writes and cancel or extend them as necessary. That implies that all such reads and writes would need to be done in a separate goroutine. And then, if they're too slow and the timer fires before the handler's I/O goroutine completes, just return from ServeHTTP and we'd either kill the TCP connection (for http1) or do a RST_STREAM (for http2).

I suppose the problem is that there's a small window where the I/O might have completed just between the timer firing and the ServeHTTP code exiting, so once the http package sees ServeHTTP is done, it may not see any calls still in Read or Write, and might then not fail the connection as intended. That suggests we need some sort of explicit means of aborting a response. http1 has that with Hijacker.

I've recommended in the past that people just panic, since the http package has that weird feature where it recovers panics. Would that work? Do all your I/O in separate goroutines, and panic if they take too long? We might get data races with http1 and/or http2 with that, but I could probably make it work if you like the idea.
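For illustration, a minimal sketch of this "I/O in a goroutine, panic on timeout" idea might look like the following. The handler name, the 10-second budget, and the use of http.ErrAbortHandler (added later, in Go 1.8, as the sanctioned "abort this response" panic value) are my choices, not part of the comment above:

```go
package main

import (
	"bytes"
	"net/http"
	"time"
)

func slowStreamHandler(w http.ResponseWriter, r *http.Request) {
	data := bytes.Repeat([]byte("x"), 1<<20) // stand-in for one chunk of an infinite stream
	done := make(chan error, 1)
	go func() {
		_, err := w.Write(data) // may block indefinitely on a slow client
		done <- err
	}()
	select {
	case <-done:
		// The write finished (successfully or not) within the budget.
	case <-time.After(10 * time.Second):
		// Abort the response; the server recovers the panic and tears down the
		// connection (HTTP/1) or stream (HTTP/2). Note the race the comment
		// above mentions: the Write may still be in flight at this point.
		panic(http.ErrAbortHandler)
	}
}

func main() {
	http.HandleFunc("/stream", slowStreamHandler)
	http.ListenAndServe(":8080", nil)
}
```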
CL https://golang.org/cl/32024 mentions this issue.
…meout
Updates #14204
Updates #16450
Updates #16100
Change-Id: Ic283bcec008a8e0bfbcfd8531d30fffe71052531
Reviewed-on: https://go-review.googlesource.com/32024
Reviewed-by: Tom Bergan <tombergan@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Deferring to Go 1.9 due to lack of reply.
@bradfitz I faced a similar problem when I started with Go. I had a service in Node using HTTP 1.1 with chunked transfer encoding; it implemented bidirectional streaming over HTTP 1.1. On each write or read, the deadline/timeout would get reset so that the connection remained open. It worked on Node.js, which uses libuv. This predated websockets and avoided long polling. It is a little-known technique but is somewhat mentioned in https://tools.ietf.org/rfc/rfc6202.txt (section 3). I also remember Twitter's streaming API using it. Anyway, migrating my Node service to Go was not possible because of the use of absolute timeouts/deadlines. So, I guess @FiloSottile is referring to a similar case. At least in my particular scenario, what I wanted was to be able to reset the connection's write and read timeout/deadline so that the connection remained open but still got closed if it became idle.
A timeout that restarts on each write or read would be useful for my use case too, and if it were implemented I could rewrite a legacy C++ application in Go. If the timeout could be set per request and not only globally, that would be a good plus.
@bradfitz Sorry for not following up on this one; it turned out to be more nuanced than I thought (HTTP/2, brain, HTTP/2 exists) and I didn't have the time to look at it further. Doing I/O in a goroutine instead of the timer sounds upside-down, but I don't have specific points to make against it. What would make more sense to me would be a way to cancel the whole thing, which would make Body.Read/ResponseWriter.Write return an error that would then be handled normally. Question: without such a mechanism, how is the user supposed to time out a read when using …?
One possible solution would be for Request.Body to be documented to allow Close concurrent with Read, and not to panic on double Close, and for ResponseWriter to be upgradeable to io.Closer. |
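To illustrate the shape of that proposal, a handler-side timeout might then look roughly like the hypothetical sketch below. Nothing here is guaranteed by net/http today; the concurrent Body.Close and the io.Closer upgrade on ResponseWriter are exactly the documentation changes being suggested, and the 30-second value is illustrative:

```go
func handlerWithIOTimeout(w http.ResponseWriter, r *http.Request) {
	// Hypothetical: assumes Body.Close may be called concurrently with Read
	// and that the ResponseWriter can be upgraded to io.Closer to unblock a
	// pending Write. Neither is promised by the current implementation.
	t := time.AfterFunc(30*time.Second, func() {
		r.Body.Close()
		if c, ok := w.(io.Closer); ok {
			c.Close()
		}
	})
	defer t.Stop()

	io.Copy(io.Discard, r.Body) // consume a possibly slow request body
}
```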
For what it's worth, I think we have a related use case (only for reading requests, not for writing). We have some slow clients doing large PUT requests, sometimes on unstable connections which die. Currently we provide a net.Listener/net.Conn which sets a Deadline on every Read/Write, but that doesn't seem a viable solution since it would interfere with timeouts set by net/http.Server.
The problem is defining what "too long" means. These demands for IO activity should (for HTTP) only be enforced during ConnState stateActive (and stateNew). It would be OK for a connection to not have any IO activity in stateIdle. I've been doing some experiments setting deadlines on connections, which is defeated by tls.Conn hiding the underlying connection object from external access. Being able to set demands for IO activity during HTTP stateActive, or a more general way to do this for net.Conn without overwriting deadlines set by other APIs and regardless of whether the conn object is wrapped in TLS, would be nice :) PS: much of the complexity comes from not being able to access the underlying connection in a crypto/tls.Conn object. I can understand not exposing the connection to a Handler, but in the ConnState callback, is there any reason not to allow getting the underlying connection of a tls.Conn?
Just to throw in my 2¢, I have also personally hit the need to set timeouts on infinite connections, and have met someone at a Go meetup who independently brought up the same problem. In terms of API, why can't the underlying …? I agree with @FiloSottile's comment about the separate goroutine seeming upside-down. Though I sort of like it, it uses Go's primitives in a clever, non-obvious way, which makes me feel nervous and seems like it would be hard to document.
Having ResponseWriter provide Set*Deadline() would only solve the OP's problem of writing data to the client, not the problem of stopping a long-running PUT that doesn't really transfer data at any satisfying rate.
How about …
If that can be implemented without interfering with other deadlines, like ReadTimeout, then it would be interesting to try.
@bradfitz Is it expected, with the current master of the http2 pkg, that panicking out of a handler with a left-behind goroutine blocked on a body read or response-writer write causes a panic internal to the http2 pkg? (The existing TimeoutHandler would have the same issue.) Or is the behavior you're referring to something that would be made safe if that is the chosen route, but doesn't exist today?
We ran into this issue on our project: upspin/upspin#313. We had our timeouts set quite low, as per @FiloSottile's recommendations, but our users regularly send 1MB POST requests, which not all connections can deliver in a short time. We would like to be able to set the timeout after reading the request headers. This would let us increase the timeouts for authenticated users only (we don't mind if our own users want to DoS us; at least we'll know who they are), but give unauthenticated users a very short timeout. cc @robpike
I (again) ran into the issue of specifying handler-specific timeouts. @mikelnrd AFAICS your approach does not work, since the connection you get from … However, I may have found a similar workaround:

```go
ConnContext: func(ctx context.Context, c net.Conn) context.Context {
	writeTimeout, cancelWriteTimeout := context.WithTimeout(ctx, 10*time.Second)
	go func() {
		defer cancelWriteTimeout() // release resources
		<-writeTimeout.Done()      // wait for the timeout or the timeout cancel
		if err := writeTimeout.Err(); err == context.DeadlineExceeded {
			c.Close() // only close the connection in case of an exceeded deadline, not in case of cancellation
		}
	}()
	return context.WithValue(ctx, ctx.Value(http.ServerContextKey), cancelWriteTimeout)
},
```

Here, I create a goroutine that waits for the write-timeout context and closes the connection once the deadline is exceeded. At the moment this approach has the problem that if a handler function returns (e.g. the request has been handled successfully), the write-timeout goroutine will stick around until the timeout elapses. Therefore, I wrap the http.ServeMux:

```go
type cancelWriteTimeoutHandler struct{ *http.ServeMux }

func (h cancelWriteTimeoutHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	defer func() {
		ctx := r.Context()
		f := ctx.Value(ctx.Value(http.ServerContextKey))
		if f, ok := f.(context.CancelFunc); ok {
			f()
		}
	}()
	h.ServeMux.ServeHTTP(w, r)
}
```

This handler will cancel the write timeout when the request has been handled. This ensures that the write-timeout goroutine only lives as long as the request.
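For completeness, wiring the two pieces above into a server could look roughly like this. It is only a sketch: connContextWithWriteTimeout stands for the ConnContext function shown above (assumed to be assigned to a variable of that name), and the handler body is illustrative:

```go
func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello") // placeholder handler
	})

	srv := &http.Server{
		Addr:    ":8080",
		Handler: cancelWriteTimeoutHandler{mux},
		// The ConnContext function from the snippet above.
		ConnContext: connContextWithWriteTimeout,
	}
	log.Fatal(srv.ListenAndServe())
}
```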
nginx has the …
This can be implemented with a new …
This issue needs attention. There are many reasonable use cases, like different policies depending on authentication or varied types of requests. In my case, a single service is supposed to usually serve small responses, but sometimes, after authentication and understanding the request, it turns out that the file to be served is huge and might take a legitimate user hours to download. So you have to balance between: …
It is hard to comprehend how there is no way to keep a legitimate connection going, while giving unauthenticated users a proper timeout. I cannot understand how people are able to overlook this and expose Go services to the internet relying on net/http. Is there any acceptable workaround besides having a reverse proxy babysitting the Go service? Or another package to drop in place of net/http?
When using GoatCounter directly internet-facing it's liable to keep connections around for far too long, exhausting the max. number of open file descriptors, especially with "idle" HTTP/2 connections which, unlike HTTP/1.1 Keep-Alive don't have an explicit timeout. This isn't much of a problem if you're using a proxy in front of it, as most will have some timeouts set by default (unlike Go, which has no timeouts at all by default). For the backend interface, keeping a long timeout makes sense; it reduces overhead on requests (TLS setup alone can be >200ms), but for the /count request we typically want a much shorter timeout. Unfortunately, configuring timeouts per-endpoint isn't really supported at this point, although some possible workarounds are mentioned in [1], it's all pretty ugly. We can add "Connection: close" to just close the connection, which is probably much better for almost all cases than keeping a connection open since most people only visit a single page, and keeping a connection open in the off-chance they click somewhere again probably isn't really worth it. And setting *any* timeout is better than setting *no* timeout at all! --- In the email conversation about this, the other person mentioned that some connections linger for hours, so there *may* be an additional problem either in the Go library or in GoatCounter, since this is a really long time for a connection to stay open. Then again, it could also be weird scripts or malicious users🤔 [1]: golang/go#16100
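The "Connection: close" part of that commit message is already expressible today. A sketch (the /count endpoint name is taken from the commit message; the handler body is illustrative):

```go
http.HandleFunc("/count", func(w http.ResponseWriter, r *http.Request) {
	// For HTTP/1.1 the server closes the connection after this response,
	// trading connection reuse for predictable resource release.
	// HTTP/2 handles this header differently.
	w.Header().Set("Connection", "close")
	w.WriteHeader(http.StatusNoContent)
})
```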
My project also needs per-handler timeouts, so that I can do something like req.SetReadTimeout() in the handler after classifying the incoming request.
@bradfitz @fraenkel @FiloSottile could any of you file a specific proposal from the above discussion in a new issue?
For what it is worth, I seem to have found a simple workaround. The idea is to use the response writer from a separate goroutine that is linked with a channel to the HTTP handler. This allows the handler to return and close the request/response when a slow consumer is detected or the connection breaks. See https://github.com/karaatanassov/go_http_write_timeout/blob/989c390106c8344974b72d2207ff7de61357b6ac/main.go#L14 for example:

```go
func handler(w http.ResponseWriter, r *http.Request) {
	log.Print("Request received.")
	defer wg.Done() // wg is a package-level sync.WaitGroup in the linked example
	defer log.Print("Request done.")
	ctx := r.Context() // If we generate an error, consider context.WithCancel
	publisherChan := streamPublisher(ctx) // defined in the linked example; returns a channel of []byte
	resultChan := make(chan []byte)
	go writeResult(resultChan, w)
	for value := range publisherChan {
		select {
		case resultChan <- value:
		case <-r.Context().Done(): // Client has closed the socket.
			log.Print("Request is Done.")
			close(resultChan) // Close the result channel to exit the writer.
			return
		case <-time.After(1 * time.Second): // The socket is not writable within the given timeout; quit.
			log.Print("Output channel is not writable for 1 second. Close and exit.")
			close(resultChan) // Close the result channel to exit the writer.
			return
		}
	}
}

func writeResult(resultChan chan []byte, w http.ResponseWriter) {
	defer wg.Done()
	for r := range resultChan {
		w.Write(r)
	}
	log.Print("Write Result is done.")
}
```

In my observation the blocked … I am not sure if this is safe for production, so I will play with it. But it seems the situation with this blocking …
@karaatanassov this is a bit off-topic, but just to avoid problems for the people who find your snippet: unless I'm overlooking something, it unfortunately doesn't really solve the underlying problem. It just "sweeps it under the rug", so to speak. Your handler will return, but the …
@costela go check out the git project. It actually releases the … It is a complete solution, at least in my test. I want to do experiments with http/1 and http/2, as the underlying systems are quite different. It is kind of a logical solution, given that the only way to tell Go to close the response is by returning from the handler function. The …
(sorry if partly OT)
@karaatanassov you might need to add a LICENSE to your project, otherwise no one is able to leverage your code as an example.
@karaatanassov Your code is buggy because …
Good point. The code technically is not using the … A cleaner way could be to use … I will still keep the … PS: I have updated the example to use a cancelable Request and not return from the handler before the response is released. https://github.com/karaatanassov/go_http_write_timeout
I'm pretty sure it doesn't … Also, @tv42's comment is correct and should be reason enough to avoid this solution. Your code most definitely is using the … But let's try not to derail this issue any further. If you want, we can keep talking about this in the linked gist's comments.
To conclude: there is no way to end requests with a pending write on Linux and Windows. The workarounds I suggested work on macOS only. It seems a legitimate bug that HTTP requests with a pending write cannot be completed/cancelled. This makes Go servers precarious for internet use. It is not only a timeout that is missing.
Well ... not in any elegant way.
I believe I've run into this same issue. If, while looping over data to write back to a user, they disconnect, the context often will not save me from an infinite block on Write(). Example:

```go
func (a *App) Foo(w http.ResponseWriter, r *http.Request) {
	ctx := r.Context()
	for x := 0; x < 100; x++ {
		select {
		case <-ctx.Done():
			// User has disconnected, stopping.
			return
		default:
			// Assume the user just disconnected.
			w.Write([]byte("example output")) // blocks until the http server write timeout
		}
	}
}
```

In my real project I might have to write back data for several minutes, so I can't just set a low WriteTimeout.

Proposed solutions: …
Until Go supports something like nginx's proxy_read_timeout (http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_read_timeout), if you need to read large requests or serve large responses (for example, uploading/downloading files), it is better to put your Go application behind a reverse proxy like nginx.
Based on the gist here, I ended up with this hack/workaround
For my use case I use 60 seconds for the http.Server and listener Read/Write timeouts. This way slowloris is no longer an issue. I hope this can help others and that a proper solution will be included in …
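The gist itself isn't reproduced above, but a minimal sketch of the kind of deadline-extending listener wrapper being described could look like the following. All names are mine and the 60-second value matches the comment; note the earlier caveat in this thread that such a wrapper fights with the Server's own ReadTimeout/WriteTimeout, so those are left unset here:

```go
package main

import (
	"log"
	"net"
	"net/http"
	"time"
)

// deadlineConn pushes the connection deadline forward on every Read and
// Write, so the timeout bounds inactivity rather than total request time.
type deadlineConn struct {
	net.Conn
	timeout time.Duration
}

func (c *deadlineConn) Read(p []byte) (int, error) {
	c.Conn.SetDeadline(time.Now().Add(c.timeout))
	return c.Conn.Read(p)
}

func (c *deadlineConn) Write(p []byte) (int, error) {
	c.Conn.SetDeadline(time.Now().Add(c.timeout))
	return c.Conn.Write(p)
}

// deadlineListener wraps every accepted connection in a deadlineConn.
type deadlineListener struct {
	net.Listener
	timeout time.Duration
}

func (l *deadlineListener) Accept() (net.Conn, error) {
	c, err := l.Listener.Accept()
	if err != nil {
		return nil, err
	}
	return &deadlineConn{Conn: c, timeout: l.timeout}, nil
}

func main() {
	ln, err := net.Listen("tcp", ":8080")
	if err != nil {
		log.Fatal(err)
	}
	srv := &http.Server{Handler: http.DefaultServeMux}
	log.Fatal(srv.Serve(&deadlineListener{Listener: ln, timeout: 60 * time.Second}))
}
```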
A Handler has no way of changing the underlying connection Deadline, since it has no access to the net.Conn (except by maintaining a map from RemoteAddr to net.Conn via Server.ConnState, but it's more than anyone should need to do). Moreover, it can't implement a timeout itself because the Close method of the ResponseWriter implementation is not documented to unblock concurrent Writes.

This means that if the server has a WriteTimeout, the connection has a definite lifespan, and streaming is impossible. So servers with any streaming endpoints are forced not to implement timeouts at all on the entire Server.

A possible solution might be to expose the net.Conn in the Context. Another could be to allow interface upgrades to the SetDeadline methods on ResponseWriter. Yet another would be to make (*response).Close unblock (*response).Write.
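To make the interface-upgrade idea concrete, from a handler's point of view it might look like the hypothetical sketch below. No such interface is implemented by net/http's ResponseWriter today; this only illustrates the shape of the proposal, and the names and 10-second value are mine:

```go
func streamHandler(w http.ResponseWriter, r *http.Request) {
	// Purely hypothetical interface upgrade.
	if d, ok := w.(interface{ SetWriteDeadline(t time.Time) error }); ok {
		d.SetWriteDeadline(time.Now().Add(10 * time.Second)) // budget for the next writes
	}
	// ... stream the response, extending the deadline as needed ...
}
```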