net/http: no way of manipulating timeouts in Handler #16100

Open
FiloSottile opened this Issue Jun 17, 2016 · 47 comments


FiloSottile commented Jun 17, 2016

A Handler has no way of changing the underlying connection Deadline, since it has no access to the net.Conn (except by maintaining a map from RemoteAddr to net.Conn via Server.ConnState, but it's more than anyone should need to do). Moreover, it can't implement a timeout itself because the Close method of the ResponseWriter implementation is not documented to unblock concurrent Writes.

This means that if the server has a WriteTimeout, the connection has a definite lifespan, and streaming is impossible. So servers with any streaming endpoints are forced not to implement timeouts at all on the entire Server.

A possible solution might be to expose the net.Conn in the Context. Another could be to allow interface upgrades to the SetDeadline methods on ResponseWriter. Yet another would be to make (*response).Close unblock (*response).Write.

ianlancetaylor added this to the Go1.8 milestone Jun 17, 2016

elithrar commented Jun 24, 2016

Given that we store the *http.Server in the request context, making net.Conn available in the context via (e.g.) ConnContextKey could be an option. This could be opt-in via a field on the http.Server as stuffing the request context with things by default is not ideal.
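For illustration only: nothing like this existed at the time, but later Go releases (1.13+) added Server.ConnContext, which makes roughly the opt-in pattern described above possible with a user-defined context key (connCtxKey below is ours, not part of net/http). A minimal sketch, HTTP/1-oriented since a deadline set on the conn applies to the whole HTTP/2 connection:

package main

import (
	"context"
	"net"
	"net/http"
	"time"
)

// connCtxKey is our own context key; net/http defines no such key.
type connCtxKey struct{}

func main() {
	srv := &http.Server{
		Addr: ":8080",
		// ConnContext (Go 1.13+) lets the server stash the net.Conn in the
		// base context of every request made on that connection.
		ConnContext: func(ctx context.Context, c net.Conn) context.Context {
			return context.WithValue(ctx, connCtxKey{}, c)
		},
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			if c, ok := r.Context().Value(connCtxKey{}).(net.Conn); ok {
				// Extend the write deadline for this streaming response.
				c.SetWriteDeadline(time.Now().Add(5 * time.Minute))
			}
			// ... stream the response ...
		}),
	}
	srv.ListenAndServe()
}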

noblehng commented Jul 1, 2016

I think this can be done by providing a custom net.Listener to (*http.Server).Serve.

That is, embedding a *net.TCPListener and overriding the Accept method so that it returns a custom net.Conn. The custom net.Conn would embed a *net.TCPConn and override the Write method.

The overridden Write method could reset the write deadline on every write, or use an atomic counter to reset the write deadline after some number of consecutive writes or bytes. But for truly on-demand write deadline resetting, one still needs some way to do that from the higher-level handler side.

Since an HTTP/2 connection can do multiplexing, it would be helpful to have a set of timeouts for individual streams. When a stream hangs on the client side, we could use those timeouts to release resources associated with that stream; this is not possible by setting deadlines on the lower-level underlying connection.
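A minimal sketch of the listener/conn wrapping described above (the type names and the 30-second window are ours): the wrapped conn pushes its write deadline forward before every Write, so the connection is only torn down when no write succeeds within the window.

package main

import (
	"net"
	"net/http"
	"time"
)

// deadlineConn resets the write deadline on every Write, so only a
// connection that makes no progress for `window` gets closed.
type deadlineConn struct {
	*net.TCPConn
	window time.Duration
}

func (c *deadlineConn) Write(p []byte) (int, error) {
	c.SetWriteDeadline(time.Now().Add(c.window))
	return c.TCPConn.Write(p)
}

// deadlineListener hands out deadlineConns from Accept.
type deadlineListener struct {
	*net.TCPListener
	window time.Duration
}

func (l *deadlineListener) Accept() (net.Conn, error) {
	c, err := l.AcceptTCP()
	if err != nil {
		return nil, err
	}
	return &deadlineConn{TCPConn: c, window: l.window}, nil
}

func main() {
	ln, err := net.Listen("tcp", ":8080")
	if err != nil {
		panic(err)
	}
	srv := &http.Server{}
	srv.Serve(&deadlineListener{ln.(*net.TCPListener), 30 * time.Second})
}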

bradfitz commented Sep 1, 2016

@FiloSottile, we won't be exposing net.Conn to handlers, or let users explicitly set deadlines on conns. All public APIs need to consider both HTTP/1 and HTTP/2.

You propose many solutions, but I'd like to get a clear statement of the problem first. Even the title of this bug reads like a description of a missing solution rather than of a problem that can't otherwise be solved.

I agree that the WriteTimeout is ill-specified. See also my comment about ReadTimeout here: #16958 (comment)

You allude to your original problem here:

So servers with any streaming endpoints are forced not to implement timeouts at all on the entire Server.

So you have an infinite stream, and you want to forcibly abort that stream if the user isn't reading fast enough? In HTTP/1, that means closing the TCP connection. In HTTP/2, that means sending a RST_STREAM.

I guess we need to define WriteTimeout before we make progress on this bug.

What do you think WriteTimeout should mean?

thincal commented Sep 29, 2016

Why not provide a timeout version of Read/Write? Is there any reason?

Request.Body.TimedRead(p []byte, timeout time.Duration)
Request.Body.TimedWrite(p []byte, timeout time.Duration)

My concrete case is:

Case 1: read

The client sends data via a POST API provided by the server, but if the network is disconnected while the server reads the request data via request.Body.Read, the server will be blocked in Body.Read forever.

Case 2: write

Similar, but it's the server writing data back to the client.

So I need a timeout mechanism so that the server can be unblocked from this Read/Write.
This is a very intuitive requirement; maybe there is another solution already?

Thanks.

bradfitz commented Oct 22, 2016

What if the mechanism to abort blocked read or write I/O is to just exit from the ServeHTTP function?

So then handlers can do their own timers before reads/writes and cancel or extend them as necessary. That implies that all such reads & writes would need to be done in a separate goroutine. And then if they're too slow and the timer fires before the handler's I/O goroutine completes, just return from ServeHTTP and we'd either kill the TCP connection (for http1) or do a RST_STREAM (for http2).

I suppose then the problem is there's a small window where the I/O might've completed just in the window between the timer firing and the ServeHTTP code exiting, so then once the http package sees ServeHTTP is done, it may not see any calls still in Read or Write, and might then not fail the connection as intended.

That suggests we need some sort of explicit means of aborting a response. http1 has that with Hijacker.

I've recommended in the past that people just panic, since the http package has that weird feature where it recovers panics.

Would that work? Do all your I/O in separate goroutines, and panic if they take too long?

We might get data races with http1 and/or http2 with that, but I could probably make it work if you like the idea.
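A rough sketch (ours, not from the thread) of the pattern being floated here: do the response I/O in a child goroutine and abandon it from ServeHTTP if a timer fires first. It assumes that aborting the handler tears down the connection or stream, which is what panicking with http.ErrAbortHandler (added later, in Go 1.8) does in current Go.

package main

import (
	"io"
	"net/http"
	"time"
)

func streamHandler(w http.ResponseWriter, r *http.Request) {
	done := make(chan error, 1)
	go func() {
		// All response I/O happens in this child goroutine; it can block
		// indefinitely if the client reads too slowly.
		_, err := io.Copy(w, r.Body) // stand-in for whatever is being streamed
		done <- err
	}()

	select {
	case <-done:
		// The I/O finished (or failed) in time; nothing to abort.
	case <-time.After(10 * time.Second):
		// Too slow: abort out of ServeHTTP. net/http recovers the panic and
		// closes the TCP connection (HTTP/1) or resets the stream (HTTP/2).
		// The I/O goroutine may stay blocked until the connection dies,
		// which is related to the small race window mentioned above.
		panic(http.ErrAbortHandler)
	}
}

func main() {
	http.ListenAndServe(":8080", http.HandlerFunc(streamHandler))
}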

gopherbot commented Oct 25, 2016

CL https://golang.org/cl/32024 mentions this issue.

gopherbot pushed a commit that referenced this issue Oct 26, 2016

net/http: add Server.ReadHeaderTimeout, IdleTimeout, document WriteTimeout

Updates #14204
Updates #16450
Updates #16100

Change-Id: Ic283bcec008a8e0bfbcfd8531d30fffe71052531
Reviewed-on: https://go-review.googlesource.com/32024
Reviewed-by: Tom Bergan <tombergan@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>

bradfitz commented Nov 1, 2016

Deferring to Go 1.9 due to lack of reply.

bradfitz modified the milestones: Go1.9, Go1.8 Nov 1, 2016

c4milo commented Nov 1, 2016

@bradfitz I faced a similar problem when I started with Go. I had a service in Node using HTTP/1.1 with chunked transfer encoding; it implemented bidirectional streaming over HTTP/1.1. On each write or read, the deadline/timeout would get reset so that the connection remained open. It worked on Node.js, which uses libuv. This predated websockets and avoided long polling. It is a little-known technique but is mentioned in https://tools.ietf.org/rfc/rfc6202.txt (section 3). I also remember Twitter's streaming API using it. Anyway, migrating my Node service to Go was not possible because of the use of absolute timeouts/deadlines.

So I guess @FiloSottile is referring to a similar case. At least in my particular scenario, what I wanted was to be able to reset the connection's write and read timeout/deadline so that the connection remained open but still got closed if it became idle.

drakkan commented Nov 2, 2016

A timeout that restarts on each write or read would be useful for my use case too; if implemented, I could rewrite a legacy C++ application in Go.

If the timeout could be set per request, and not only globally, that would be a good plus.

FiloSottile commented Dec 15, 2016

@bradfitz Sorry for not following up on this one; it turned out to be more nuanced than I thought (namely, HTTP/2 exists) and I didn't have the time to look at it further.

Doing I/O in a goroutine instead of the timer sounds upside-down, but I don't have specific points to make against it. What would make more sense to me would be a way to cancel the whole thing, which would make Body.Read/ResponseWriter.Write return an error that would then be handled normally.

Question, without such a mechanism, how is the user supposed to timeout a read when using ReadHeaderTimeout?

    // ReadHeaderTimeout is the amount of time allowed to read
    // request headers. The connection's read deadline is reset
    // after reading the headers and the Handler can decide what
    // is considered too slow for the body.

FiloSottile commented Dec 15, 2016

One possible solution would be for Request.Body to be documented to allow Close concurrent with Read, and not to panic on double Close, and for ResponseWriter to be upgradeable to io.Closer.

peter-mogensen commented Dec 16, 2016

For what it's worth... I think we have a related use case (only for reading requests, not for writing).

We have some slow clients doing large PUT requests - sometimes on unstable connections which die.
But we would like to allow these PUT requests as long as there is actually progress and Read() returns data - preferably only on the endpoints where that should be allowed.

Currently we provide a net.Listener/net.Conn which sets a deadline on every Read/Write, but that does not seem like a viable solution since it would interfere with timeouts set by net/http.Server.

peter-mogensen commented Feb 22, 2017

Would that work? Do all your I/O in separate goroutines, and panic if they take too long?

The problem is defining what is "too long".
Often what you want is not to kill long-running I/O in absolute terms, but to kill "too slow" connections.
I don't want to kill a long-running PUT request as long as it's actually transferring data. But I would kill a request only sending 1 byte/second.

These demands for I/O activity should (for HTTP) only be enforced during ConnState StateActive (and StateNew). It would be OK for a connection to have no I/O activity in StateIdle.

I've been doing some experiments setting deadlines on connections - which is defeated by tls.Conn hiding the underlying connection object from external access.
I've also tried having a reaper goroutine Close connections with no I/O activity... which becomes equally messy, although not impossible. (**)

Being able to set demands for I/O activity during HTTP StateActive - or a more general way to do this for a net.Conn without overwriting deadlines set by other APIs, and regardless of whether the conn object is wrapped in TLS - would be nice :)

** PS: much of the complexity comes from not being able to access the underlying connection in a crypto/tls.Conn object. I can understand not exposing the connection to a Handler, but in the ConnState callback, is there any reason not to allow getting the underlying connection of a tls.Conn?

pwaller commented Feb 22, 2017

Just to throw in my 2¢, I also have personally hit the need to set timeouts on infinite connections, and have met someone at a go meetup who independently brought up the same problem.

In terms of API, why can't the underlying ResponseWriter implement Set{Read,Write}Deadline? It seems to me that they could happily do the right thing of sending RST_STREAM for HTTP/2. Just because it happens to implement the same API you'd use to time out TCP connections doesn't mean it has to time out the TCP connection itself if there is another layer that can be used.

I agree with @FiloSottile's comment about the separate goroutine seeming upside-down. Though I sort of like it, it uses Go's primitives in a clever, non-obvious way which makes me feel nervous and seems like it would be hard to document.

peter-mogensen commented Feb 22, 2017

Having ResponseWriter implement Set*Deadline() would only solve the OP's problem of writing data to the client - not the problem of stopping a long-running PUT which doesn't really transfer data at any satisfying rate.

pwaller commented Feb 22, 2017

How about (*http.Request) or (*http.Request).Body implementing Set*Deadline?

peter-mogensen commented Feb 22, 2017

If that can be implemented without interfering with other deadlines, like ReadTimeout, then it would be interesting to try.

gaillard commented Mar 3, 2017

@bradfitz Is it expected, with the current master of the http2 package, that panicking out of a handler while a left-behind goroutine is blocked on a body read or response writer write causes a panic internal to the http2 package? (The existing TimeoutHandler would have the same issue.) Or is the behavior you're referring to something that would be made safe if that route is chosen, but doesn't exist today?

adg commented Mar 10, 2017

We ran into this issue on our project: upspin/upspin#313

We had our timeouts set quite low, as per @FiloSottile's recommendations, but our users regularly send 1MB POST requests, which not all connections can deliver in a short time.

We would like to be able to set the timeout after reading the request headers. This would let us increase the timeouts for authenticated users only (we don't mind if our own users want to DoS us; at least we'll know who they are), but give unauthenticated users a very short timeout.

cc @robpike

peter-mogensen commented Mar 10, 2017

I think we managed to make a reasonably simple solution where each connection is monitored by a reaper goroutine which closes it if there's been no traffic for a while. We can turn on the "reaping" in the ConnState callback on StateActive and turn it off on StateIdle.
This, of course, doesn't solve anything if the client is just slow but still sends a few bytes now and then... and it doesn't solve the problem of wanting to control timeouts in a specific http.Handler.
But it does clean up connections without a living peer.
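For concreteness, a compressed and untested sketch (ours, plain TCP only) of that reaper approach: a wrapping net.Conn records the time of the last successful Read/Write, the ConnState hook toggles reaping on StateNew/StateActive and off otherwise, and a background loop closes connections with no recent I/O. As noted above, TLS complicates this because tls.Conn hides the wrapped conn.

package main

import (
	"net"
	"net/http"
	"sync"
	"sync/atomic"
	"time"
)

// reapConn records the time of the last successful Read or Write.
type reapConn struct {
	net.Conn
	lastIO  atomic.Int64 // unix nanos of the last I/O
	reaping atomic.Bool  // true while the conn is in StateNew/StateActive
}

func (c *reapConn) touch() { c.lastIO.Store(time.Now().UnixNano()) }

func (c *reapConn) Read(p []byte) (int, error) {
	n, err := c.Conn.Read(p)
	c.touch()
	return n, err
}

func (c *reapConn) Write(p []byte) (int, error) {
	n, err := c.Conn.Write(p)
	c.touch()
	return n, err
}

// reapListener hands out reapConns and remembers them for the reaper loop.
type reapListener struct {
	net.Listener
	mu    sync.Mutex
	conns map[*reapConn]struct{}
}

func (l *reapListener) Accept() (net.Conn, error) {
	c, err := l.Listener.Accept()
	if err != nil {
		return nil, err
	}
	rc := &reapConn{Conn: c}
	rc.touch()
	l.mu.Lock()
	l.conns[rc] = struct{}{}
	l.mu.Unlock()
	return rc, nil
}

// reap closes connections that are marked for reaping and have seen no I/O
// for longer than maxQuiet. (A real implementation would also drop entries
// from the map when ConnState reports StateClosed.)
func (l *reapListener) reap(maxQuiet time.Duration) {
	for range time.Tick(maxQuiet / 2) {
		cutoff := time.Now().Add(-maxQuiet).UnixNano()
		l.mu.Lock()
		for c := range l.conns {
			if c.reaping.Load() && c.lastIO.Load() < cutoff {
				c.Close()
				delete(l.conns, c)
			}
		}
		l.mu.Unlock()
	}
}

func main() {
	ln, err := net.Listen("tcp", ":8080")
	if err != nil {
		panic(err)
	}
	rl := &reapListener{Listener: ln, conns: map[*reapConn]struct{}{}}
	go rl.reap(30 * time.Second)
	srv := &http.Server{
		ConnState: func(c net.Conn, s http.ConnState) {
			if rc, ok := c.(*reapConn); ok {
				rc.reaping.Store(s == http.StateNew || s == http.StateActive)
			}
		},
	}
	srv.Serve(rl)
}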

bradfitz removed their assignment Jun 8, 2017

bradfitz modified the milestones: Go1.10, Go1.9 Jun 8, 2017

bradfitz commented Jun 8, 2017

I guess this didn't happen for Go 1.9. We still need a plan.

/cc @tombergan

FiloSottile commented Jun 17, 2017

It took me an awfully long time, but it finally clicked what didn't sit right with the idea of making a return from ServeHTTP the way to break I/O: it overloads the http.Handler interface with a behaviour that won't be guaranteed by any other user of the interface. Anything from tests to frameworks will not replicate the behaviour, and there will be no way for the code to know (which is the point of interface upgrades). This makes the weird inversion of control where you do the main work in a child goroutine especially bad, because if you start the I/O goroutine, then return from ServeHTTP expecting that to unblock the I/O goroutine, and it actually doesn't, you leaked a goroutine.

And anyway, I don't think we should ever encourage starting a goroutine and losing track of it before knowing it returned.

I think we should go for some sort of interface upgrade to io.Closer, possibly in a way that works, or is easy to support, with ServeHTTP composition (aka middlewares).

tombergan commented Jun 23, 2017

This bug is jumping around a bit, so I've tried to summarize the use cases below. Please respond if I missed something important. I will keep this comment updated with the list of use cases.

For any request:

  • Abort the request if it takes too long to read the request headers (this was added already)
  • Abort the request if the request body is being uploaded too slowly
  • Custom timeout to read the entire request body based on handler-specific info (e.g., whether or not the request is authenticated).
  • Custom timeout to wait for the next message chunk (e.g., in streaming protocols)

For any response:

  • Abort the request if the response body is being consumed too slowly
  • Custom timeout to write the entire response body based on handler-specific info (e.g., size of the response)
  • Custom timeout to write the next message chunk (e.g., in streaming protocols)

gopherbot commented Jul 3, 2017

CL https://golang.org/cl/47390 mentions this issue.

matope commented Jul 3, 2017

Hi guys. I tried to implement a patch for this issue.
I made 2 modifications to net/http in order to time out slow Request.Body.Read() and ResponseWriter.Write() calls.

https://golang.org/cl/47390

Make http.Request.Body.Close() abort pending Read().

This is for timing out slow uploads.
For HTTP/2, requestBody.Close() is already able to abort a pending Read(), so I made HTTP/1.1 work like that too.

Allow http.ResponseWriter upgrade to io.Closer to Close the connection/stream.

This is for slow downloads.

  1. For HTTP/1.1, I simply close the underlying connection.
  2. For HTTP/2, I send out RST_STREAM frame to kill the stream.
    • I edited h2_bundle directly for the sake of simplicity. If you think this approach will work well, I’m going to make a separate patch for x/net/http2.

Now we can unblock in-flight Read()/Write()s by calling Close() concurrently on the body.
I think that with these modifications, we can easily implement timeouts on Body Read()/Write() as shown below.
What do you think of this approach?

type timeoutReadCloser struct {
	io.ReadCloser
	timeout time.Duration
}

// Read reads from the underlying ReadCloser and automatically calls Close()
// if the Read() takes longer than timeout.
func (r *timeoutReadCloser) Read(p []byte) (int, error) {
	t := time.AfterFunc(r.timeout, func() { r.Close() })
	defer t.Stop()
	return r.ReadCloser.Read(p)
}

func handler(w http.ResponseWriter, r *http.Request) {
	if _, err := io.Copy(ioutil.Discard, &timeoutReadCloser{r.Body, 10 * time.Second}); err != nil {
		// handle the timeout / read error
	}
}

tombergan commented Jul 5, 2017

I'm not yet sure if I like using Close in that way. It's certainly an expedient solution, but is it the right API?

If we were doing everything from scratch, body.Read and rw.Write would take a context, which would naturally allow the caller to specify deadlines on each call. I would like to explicitly reject that approach before using @matope's solution. Otherwise, we could end up in a situation like we're in for Request cancellation, where there are three ways to cancel a request, two of which are deprecated (Transport.CancelRequest, Request.Cancel).

matope commented Jul 8, 2017

@tombergan Thank you for your feedback!

It's certainly an expedient solution, but is it the right API?

The reason I thought using Close() would be a good way to do this is that I noticed x/net/http2.requestBody.Close() already aborts a pending Read(). See this.
I thought these semantics were good, and that aborting Read()/Write()s for h1/h2 and Request/Response bodies in the same way would be a reasonable and good idea.


I think this additional semantics is low-level enough that it lets us handle timeouts in custom high-level ways that fit your own app, like I demonstrated with timeoutReadCloser. Of course you can implement a context-based one; please see the following example.

If we're going to provide some high-level API for timeouts, I think it might be deprecated someday. But IMO this additional semantics is reasonably small and low-level, so it will not go stale (I hope).

An example of context-based cancellation (I didn't test it; sorry if it does not work):

func handler(w http.ResponseWriter, r *http.Request) {
	ctx, cancel := context.WithTimeout(r.Context(), 10*time.Second)

	closed := make(chan struct{})
	go func() {
		<-ctx.Done()
		r.Body.Close() // We can do this if concurrent Read() and Close() are allowed :)
		close(closed)
	}()
	// Deferred in this order so that cancel() runs first, unblocking the
	// goroutine above, and then we wait for it to exit.
	defer func() { <-closed }()
	defer cancel()

	if _, err := io.Copy(ioutil.Discard, r.Body); err != nil {
		// ...
	}
}

gopherbot commented Jul 8, 2017

CL https://golang.org/cl/47870 mentions this issue.

tombergan commented Jul 12, 2017

The reason why I thought using Close() would be a good way to do this is I noticed that x/net/http2.requestBody.Close() already abort pending Read().

That's an argument for why Close is an expedient solution, but not an argument for why Close is the right API. The Close solution forces callers to write little adapters, such as your timeoutReadCloser adapter or your context adapter. Why should callers have to go through so much trouble?

The contexts solution reuses a standard construct, contexts, which Go programmers should already be familiar with. There exist standard ways of manipulating contexts -- no custom adapters needed. We may even want to define interfaces to io, such as io.ContextReader, as well as standard adapters that convert a pair of (context.Context, io.ContextReader) to an io.Reader. For example, the response writers for HTTP/1 and HTTP/2 might implement io.ContextReader as defined below:

package io

type ContextReader interface {
	ReadContext(ctx context.Context, buf []byte) (int, error)
}

type readerUsingContext struct {
	ctx context.Context
	r   ContextReader
}

func ReaderUsingContext(ctx context.Context, r ContextReader) Reader {
	return readerUsingContext{ctx, r}
}

func (r readerUsingContext) Read(p []byte) (int, error) {
	return r.r.ReadContext(r.ctx, p)
}

The only ugliness (to me) is that callers would need explicit casts, but the Close solution also has this problem. I guess this boils down to whether or not you believe the standard library should be more context-ified. This is a larger question that is worth raising beyond this issue, and I will do so after GopherCon this week.
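A hypothetical usage sketch of that proposal, assuming the proposed io.ContextReader and ReaderUsingContext existed and request bodies implemented the interface (none of this is in the standard library today):

func handler(w http.ResponseWriter, r *http.Request) {
	ctx, cancel := context.WithTimeout(r.Context(), 10*time.Second)
	defer cancel()

	// The explicit cast mentioned above: only bodies implementing the
	// proposed io.ContextReader could be used this way.
	if cr, ok := r.Body.(io.ContextReader); ok {
		_, _ = io.Copy(ioutil.Discard, io.ReaderUsingContext(ctx, cr))
	}
}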

matope commented Jul 12, 2017

@tombergan I see what you want to propose.

I guess this boils down to whether or not you believe the standard library should be more context-ified.

I agree with you on this point. At least for now, context is not a standard or familiar (or even efficient) way to abort single Reads. So providing a ContextReader in net/http would not be reasonable for now. I think you need to propose it as a standard first (on golang-dev?).

The main motivation of this issue is to implement timeouts based on how long Read/Write block, in order to shut down slow clients (efficiently). If we provide a context-based API instead of Close(), we need to implement another Reader to achieve that goal.

matope commented Sep 3, 2017

@bradfitz Hi, now that Go 1.9 has been released, I'd like to proceed with this issue.

For now, I've proposed 2 CLs and they're in review. One is for net/http and the other is for x/net/http2.

I think at least they work as a POC to show that we're able to unblock pending I/O on Body by using Close(). But I'm not 100% sure whether closing TCP connections via Close() is a good design for HTTP/1.1 bodies. (Would it cause unacceptable incompatibilities in its semantics?) I'd be happy if you could give me some feedback, both on the design and on the implementation.

tombergan commented Sep 5, 2017

@matope I would like to have the context API explicitly rejected by the wider Go team before moving forward with the io.Closer approach. Otherwise we might some day end up with two APIs for the same problem, which will be ugly. I will try to write up a proposal this week.

Concurrently, perhaps we can short-circuit that conversation: Does anyone have a use case that cannot be solved with ReadContext / WriteContext, but can be solved by making net/http's ResponseWriter implement io.Closer?

bradfitz commented Sep 5, 2017

I still don't like Close.

I'd rather see a function in the http package that returns a non-interface value (so we can add things to it later) given an *http.Request, with type name hopefully better than "HandlerHelper" or "HandlerWriter". (since ResponseWriter is taken already)

Then we could put methods on it for Flush (returning an error!), Hijack, bytes-written accessors, close, reset stream, etc.

tombergan commented Sep 5, 2017

I understand not wanting to add yet-another-interface that users must cast ResponseWriter to, whether that interface is io.Closer, ctxio.Writer, or whatever. Adding a concrete struct that we can throw new methods on seems like a good idea. But it doesn't directly solve the problem raised by this bug.

How should callers apply per-Read/Write cancellations and deadlines? Should we add a Close or Cancel method as @matope proposed, should we add context methods as I proposed, or something else? I.e., I'd separate the questions:

  1. What is the right API to solve this bug?
  2. Should that API be exposed as an interface implemented by net/http's ResponseWriter, or as a new concrete struct that is somehow linked to an http.ResponseWriter?

AFAICT, we haven't settled the first question yet.

bradfitz commented Sep 5, 2017

I don't have time to read and consider the discussion above. I just wanted to say I'd heavily argue against new optional interfaces. I think the existing ones were probably a mistake and I'd like to stop the bleeding.

matope commented Sep 10, 2017

@bradfitz Thank you for your feedback!

I think introducing a new helper structure rather than adding new optional interfaces would be a good idea. It would enable us to extend the functionalities without breaking backward compatibility or having a lot of function-specific-interfaces.

I'm interested in your plan and I'd like to see more detail on it, but I'm not gonna rush you.
I'll stop working on my CLs until the new proposal comes.

Also, if the new struct supports setting per-Read/Write timeouts like below, users don't have to implement their own TimeoutReaders as I described above. (I expect @tombergan would like this API.)

func (r *Request) HandlerHelper() *HandlerHelper {}

// HandlerHelper does some stuff on the HTTP connection and stream.
// I haven't come up with a nice name.
type HandlerHelper struct {
	// unexported fields and methods
}

// Close closes the request connection (on HTTP/1.1) or resets the request stream (on HTTP/2).
func (hh *HandlerHelper) Close() error {}

// SetReadTimeout sets a per-Read timeout for the Request (Body) if not 0.
func (hh *HandlerHelper) SetReadTimeout(d time.Duration) {}

// SetWriteTimeout sets a per-Write timeout for the Response (Body) if not 0.
func (hh *HandlerHelper) SetWriteTimeout(d time.Duration) {}
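For illustration, a hypothetical handler using the API sketched above (HandlerHelper and its methods do not exist in net/http):

func handler(w http.ResponseWriter, r *http.Request) {
	hh := r.HandlerHelper()              // hypothetical accessor
	hh.SetReadTimeout(30 * time.Second)  // per-Read deadline for r.Body
	hh.SetWriteTimeout(30 * time.Second) // per-Write deadline for the response

	if _, err := io.Copy(w, r.Body); err != nil {
		hh.Close() // kill the connection (HTTP/1.1) or reset the stream (HTTP/2)
	}
}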

FiloSottile commented Feb 14, 2018

The current docs for Read[Header]Timeout say:

        // ReadTimeout is the maximum duration for reading the entire
        // request, including the body.
        //
        // Because ReadTimeout does not let Handlers make per-request
        // decisions on each request body's acceptable deadline or
        // upload rate, most users will prefer to use
        // ReadHeaderTimeout. It is valid to use them both.
        ReadTimeout time.Duration

        // ReadHeaderTimeout is the amount of time allowed to read
        // request headers. The connection's read deadline is reset
        // after reading the headers and the Handler can decide what
        // is considered too slow for the body.
        ReadHeaderTimeout time.Duration

Given this issue is not fixed yet, it sounds misleading to me. ReadHeaderTimeout does nothing against a malicious attacker, and "the Handler can decide what is considered too slow for the body" is inaccurate, since there is no way to set deadlines.

Am I missing something?

tv42 commented May 31, 2018

That suggests we need some sort of explicit means of aborting a response.

This now seems to be implemented, as panic(http.ErrAbortHandler), I think.

rsc commented Jun 11, 2018

Is http.ErrAbortHandler enough? Can we close this?

FiloSottile commented Jun 11, 2018

So if I understand correctly the plan is to set a timeout on the main goroutine with a panic at the end, and do your I/O in a new goroutine. https://play.golang.org/p/J8IDixEn9y1

I see a few issues with this:

  1. It doesn't (currently) work. If you run the Play link above, the I/O goroutine remains blocked on the slow reader after the handler panics. http.ResponseWriter.Write should unblock and return errors once ServeHTTP returns. This is arguably just a bug.
  2. It's convoluted and hard to discover for something (timeouts) that we would want most Handlers to manage well. It feels like net/http internals spilled over.
  3. It feels very easy to get wrong, losing track of a goroutine, or something like that.
  4. It requires special behavior from the http.ResponseWriter provider (see 1) that was not in the interface before.
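
For readers following along, here is a rough sketch of the goroutine-plus-panic pattern described above (a guess at what the Play link shows; the payload and timeout values are invented for illustration):

func slowClientHandler(w http.ResponseWriter, r *http.Request) {
	done := make(chan struct{})
	go func() {
		defer close(done)
		// A large write that can block indefinitely on a slow or stalled client.
		_, _ = io.Copy(w, strings.NewReader(strings.Repeat("x", 1<<20)))
	}()
	select {
	case <-done:
		// The write finished in time.
	case <-time.After(5 * time.Second):
		// Abort the response. Per point 1 above, this does not currently
		// unblock the Write still in flight in the goroutine.
		panic(http.ErrAbortHandler)
	}
}
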
@bradfitz

Member

bradfitz commented Jun 11, 2018

@FiloSottile, as background, we're asking whether this is enough for now because there's no great place to put new options. We want to stop adding optional interfaces for http.ResponseWriter. They compose poorly.

If we make a "v2" version of net/http, we can fix this (and many other things), but we're wondering if we really need anything more for now.

If you feel strongly we do need this now, then what API do you propose?
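
As a concrete illustration of why optional ResponseWriter interfaces compose poorly (a hypothetical middleware, not code from this thread): any wrapper that embeds the http.ResponseWriter interface hides the underlying writer's optional interfaces from handlers further down the chain.

// loggingWriter records the status code, but because it embeds only the
// http.ResponseWriter interface it does not implement http.Flusher,
// http.Hijacker, or http.Pusher even when the wrapped writer does.
type loggingWriter struct {
	http.ResponseWriter
	status int
}

func (lw *loggingWriter) WriteHeader(code int) {
	lw.status = code
	lw.ResponseWriter.WriteHeader(code)
}

func logging(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		lw := &loggingWriter{ResponseWriter: w, status: http.StatusOK} // 200 unless WriteHeader is called
		next.ServeHTTP(lw, r)
		log.Printf("%s %s -> %d", r.Method, r.URL.Path, lw.status)
	})
}

A handler behind this middleware that asserts w.(http.Flusher) fails the assertion, and each new optional interface makes wrappers like this one a little more wrong.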

@bradfitz

Member

bradfitz commented Jun 12, 2018

I was about to propose a hypothetical API, but I realized I wasn't sure whether I had the right bug, so I just tried to reread this whole bug to find the problem statement, and I'm mostly seeing solutions.

Can somebody summarize the problem we're trying to solve here?

@FiloSottile

Member

FiloSottile commented Jun 12, 2018

The shortest tl;dr is that there is no way to set timeouts based on the endpoint or on the client, but only server-wide, nor is there any way to update timeouts while serving a response as circumstances change.

The reason one wants to set timeouts is to protect against clients going away or being intentionally slow. One might want to exempt an endpoint, for example, because its handler serves streaming content or long downloads; one might want to exempt a client, for example, after it has authenticated.

#16100 (comment) is a good practical example. I feel like the first two paragraphs of my original report are also not solution-oriented.
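
For context, this is roughly all that can be expressed today: the timeouts are fields on the Server, so they apply to every handler on it (the values and the mux variable here are arbitrary examples):

srv := &http.Server{
	Addr:              ":8080",
	Handler:           mux, // some *http.ServeMux defined elsewhere
	ReadHeaderTimeout: 10 * time.Second,
	// WriteTimeout covers every response, including streaming ones, which is
	// why servers with streaming endpoints often have to leave it at 0.
	WriteTimeout: 30 * time.Second,
}
log.Fatal(srv.ListenAndServe())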

@bradfitz

Member

bradfitz commented Jun 12, 2018

Okay, thanks.

If the client goes away ("laptop lid closes in a coffee shop" case), our default TCP keep-alive detection will kill the connection after some time. So I'm less worried about that.

But having different policies based on authentication is a good example.

I imagine this would apply equally to read timeouts, in addition to write timeouts.

How about something like this for an API: (ignoring the names)

package http // net/http

func NewHandlerController(w http.ResponseWriter, r *http.Request) *HandlerController { ... }

type HandlerController struct { /* opaque */ }

// SetFoo changes the foo deadline and reports whether it was successfully adjusted.
func (hc *HandlerController) SetFoo(time.Time) bool { ... }
func (hc *HandlerController) SetBar(time.Duration) bool { ... }

And then we're largely insulated from the mechanism we use to implement it over time. If we want to do private magic on the ResponseWriter interface, we can. If we want to use magic private context values, we can. But the user gets concrete types.

I'd start with that (for Go 1.12), implement it for HTTP/1 and HTTP/2 only, and not provide a way for others to implement it for their own Servers/ResponseWriters right away, knowing that this API would let us do so later.

I'm not really a fan of the HandlerController name, though; it's just verbose here as an example.
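
To make the shape of that more concrete, a hypothetical handler using it might look like this; none of this API exists today, and SetWriteDeadline stands in for the placeholder SetFoo above:

func downloadHandler(w http.ResponseWriter, r *http.Request) {
	hc := http.NewHandlerController(w, r) // hypothetical constructor from the sketch above
	// Give this one response a much longer write deadline than the
	// server-wide default, e.g. for a large file download.
	if !hc.SetWriteDeadline(time.Now().Add(10 * time.Minute)) {
		log.Print("write deadline not adjustable for this connection")
	}
	http.ServeFile(w, r, "/srv/files/large.iso") // path is only an example
}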

@FiloSottile

Member

FiloSottile commented Jun 12, 2018

LGTM, although I would return an error (Set[Read|Write]Deadline(time.Time) error) so that we can communicate why the feature is not available.

@edaniels

Contributor

edaniels commented Jul 15, 2018

Pulling together a few comments over time here. Given that the ResponseWriter is what currently lets us hijack the underlying connection, would it be sufficient to start having net/http's ResponseWriter implement io.Closer and a new http.DeadlineController with methods:

type DeadlineController interface {
  SetReadDeadline(time.Duration)
  SetWriteDeadline(time.Duration)
}

This would be in line with the other interfaces already in use and implemented by the underlying ResponseWriter from the server.

A user wanting to implement Server-Sent Events could do the following:

func streamHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/event-stream")
	controller, ok := w.(http.DeadlineController)
	if !ok {
		panic("expected w to be an http.DeadlineController")
	}
	for {
		// Allow up to 2 seconds for each event to reach the client.
		controller.SetWriteDeadline(2 * time.Second)
		if _, err := w.Write([]byte("data: some piece of data that takes up to 2 seconds to write\n\n")); err != nil {
			return // the client went away or the write deadline expired
		}
		w.(http.Flusher).Flush()
		time.Sleep(time.Second)
	}
}

You could mix io.Closer into that example, or use the request's Context to signal the handler to stop the stream.

I also like @tombergan's mention of contextified reads and writes as potential new methods, so as to avoid breaking the library.

With all that said, it feels like the right future approach is to keep this functionality scoped to the ResponseWriter rather than at a higher level (handler, router, or server), so as to have the most flexibility and the most obvious/expected API.
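
A small variation on the sketch above that also uses the request's Context to stop the stream, still assuming the hypothetical http.DeadlineController:

func streamWithCancelHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/event-stream")
	controller := w.(http.DeadlineController) // hypothetical interface from above
	flusher := w.(http.Flusher)
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-r.Context().Done():
			// The client disconnected or the request was shut down.
			return
		case <-ticker.C:
			controller.SetWriteDeadline(2 * time.Second)
			if _, err := w.Write([]byte("data: tick\n\n")); err != nil {
				return
			}
			flusher.Flush()
		}
	}
}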

@pd60193

pd60193 commented Jul 27, 2018

> The shortest tl;dr is that there is no way to set timeouts based on the endpoint or on the client, but only server-wide, nor is there any way to update timeouts while serving a response as circumstances change.

I want to do this. I have an endpoint that exports a large file, while others perform operations on smaller-sized data. I would like to set different timeouts for these different endpoints.
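
For what it's worth, a partial workaround that exists today is to wrap only the small-response endpoints in http.TimeoutHandler. It bounds how long the handler runs and buffers the response, so it does not help with slow clients on the large-export endpoint, which still forces the server-wide WriteTimeout to stay generous or unset. A sketch, where exportHandler and apiHandler are hypothetical handlers:

mux := http.NewServeMux()
mux.Handle("/export", exportHandler) // large download: no per-handler time limit
mux.Handle("/api/", http.TimeoutHandler(apiHandler, 10*time.Second, "request timed out"))

srv := &http.Server{
	Addr:    ":8080",
	Handler: mux,
	// A WriteTimeout short enough for /api/ would cut off /export,
	// so it is left unset here.
}
log.Fatal(srv.ListenAndServe())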
