
net/http: no way of manipulating timeouts in Handler #16100

Open
FiloSottile opened this issue Jun 17, 2016 · 73 comments
Labels
FeatureRequest NeedsDecision
Milestone

Comments

@FiloSottile
Contributor

@FiloSottile FiloSottile commented Jun 17, 2016

A Handler has no way of changing the underlying connection's deadline, since it has no access to the net.Conn (except by maintaining a map from RemoteAddr to net.Conn via Server.ConnState, which is more than anyone should need to do). Moreover, it can't implement a timeout itself, because the Close method of the ResponseWriter implementation is not documented to unblock concurrent Writes.

This means that if the server has a WriteTimeout, the connection has a definite lifespan, and streaming is impossible. So servers with any streaming endpoints are forced not to implement timeouts at all on the entire Server.

A possible solution might be to expose the net.Conn in the Context. Another could be to allow interface upgrades to the SetDeadline methods on ResponseWriter. Yet another would be to make (*response).Close unblock (*response).Write.

@ianlancetaylor ianlancetaylor added this to the Go1.8 milestone Jun 17, 2016
@elithrar

@elithrar elithrar commented Jun 24, 2016

Given that we store the *http.Server in the request context, making net.Conn available in the context via (e.g.) ConnContextKey could be an option. This could be opt-in via a field on the http.Server as stuffing the request context with things by default is not ideal.

@noblehng

@noblehng noblehng commented Jul 1, 2016

I think this can be done by providing a custom net.Listener to (*http.Server).Serve.

That is, embedding a *net.TCPListener and overriding the Accept method to return a custom net.Conn. The custom net.Conn would embed a *net.TCPConn and override the Write method.

The overridden Write method could reset the write deadline on every write, or use an atomic counter to reset the write deadline after a certain number of bytes of consecutive writes. But for truly on-demand write deadline resetting, one still needs some way to do that on the higher-level handler side.

Since an HTTP/2 connection can do multiplexing, it would be helpful to have a set of timeouts for each individual stream. When a stream hangs on the client, we could use those timeouts to release the resources associated with that stream; this is not possible by setting deadlines on the lower-level underlying connection.

@bradfitz
Contributor

@bradfitz bradfitz commented Sep 1, 2016

@FiloSottile, we won't be exposing net.Conn to handlers, or let users explicitly set deadlines on conns. All public APIs need to consider both HTTP/1 and HTTP/2.

You propose many solutions, but I'd like to get a clear statement of the problem first. Even the title of this bug seems like a description of a lack of solution, rather than a problem that's not solvable.

I agree that the WriteTimeout is ill-specified. See also my comment about ReadTimeout here: #16958 (comment)

You allude to your original problem here:

So servers with any streaming endpoints are forced not to implement timeouts at all on the entire Server.

So you have an infinite stream, and you want to forcibly abort that stream if the user isn't reading fast enough? In HTTP/1, that means closing the TCP connection. In HTTP/2, that means sending a RST_STREAM.

I guess we need to define WriteTimeout before we make progress on this bug.

What do you think WriteTimeout should mean?

@thincal

@thincal thincal commented Sep 29, 2016

Why not provide a timeout version of Read/Write? Is there any reason not to?

Request.Body.TimedRead(p []byte, timeout time.Duration)
Request.Body.TimedWrite(p []byte, timeout time.Duration)

My concrete case is:

Case 1, about read:

The client sends data via a POST API provided by the server, but if the network disconnects while the server is reading the request data via request.Body.Read, the server will block in Body.Read forever.

Case 2, about write:

Similarly, when the server writes data back to the client.

So I need a timeout mechanism so that the server can be unblocked from these Read/Write calls.
This is a very intuitive requirement; perhaps there is already another solution?

Thanks.

@quentinmit quentinmit added the NeedsDecision label Oct 7, 2016
@bradfitz
Contributor

@bradfitz bradfitz commented Oct 22, 2016

What if the mechanism to abort blocked read or write I/O is to just exit from the ServeHTTP function?

So then handlers can do their own timers before reads/writes and cancel or extend them as necessary. That implies that all such reads and writes would need to be done in a separate goroutine. And then, if they're too slow and the timer fires before the handler's I/O goroutine completes, just return from ServeHTTP and we'd either kill the TCP connection (for HTTP/1) or do a RST_STREAM (for HTTP/2).

I suppose the problem then is that the I/O might have completed in the small window between the timer firing and the ServeHTTP code exiting, so once the http package sees ServeHTTP is done, it may not see any calls still in Read or Write, and might then not fail the connection as intended.

That suggests we need some sort of explicit means of aborting a response. http1 has that with Hijacker.

I've recommended in the past that people just panic, since the http package has that weird feature where it recovers panics.

Would that work? Do all your I/O in separate goroutines, and panic if they take too long?

We might get data races with http1 and/or http2 with that, but I could probably make it work if you like the idea.

@gopherbot

@gopherbot gopherbot commented Oct 25, 2016

CL https://golang.org/cl/32024 mentions this issue.

gopherbot pushed a commit that referenced this issue Oct 26, 2016
…meout

Updates #14204
Updates #16450
Updates #16100

Change-Id: Ic283bcec008a8e0bfbcfd8531d30fffe71052531
Reviewed-on: https://go-review.googlesource.com/32024
Reviewed-by: Tom Bergan <tombergan@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
@bradfitz
Contributor

@bradfitz bradfitz commented Nov 1, 2016

Deferring to Go 1.9 due to lack of reply.

@bradfitz bradfitz added this to the Go1.9 milestone Nov 1, 2016
@bradfitz bradfitz removed this from the Go1.8 milestone Nov 1, 2016
@c4milo
Member

@c4milo c4milo commented Nov 1, 2016

@bradfitz I faced a similar problem when I started with Go. I had a service in Node.js using HTTP/1.1 with chunked transfer encoding; it implemented bidirectional streaming over HTTP/1.1. On each write or read, the deadline/timeout would get reset so that the connection remained open. It worked on Node.js, which uses libuv. This predated WebSockets and avoided long polling. It is a little-known technique but is mentioned in https://tools.ietf.org/rfc/rfc6202.txt (section 3). I also remember Twitter's streaming API using it. Anyway, migrating my Node.js service to Go was not possible because of the use of absolute timeouts/deadlines.

So, I guess @FiloSottile is referring to a similar case. At least in my particular scenario, what I wanted was to be able to reset the connection's read and write timeout/deadline so that the connection remained open but still got closed if it became idle.

@drakkan

@drakkan drakkan commented Nov 2, 2016

A timeout that restarts on each write or read would be useful for my use case too; if implemented, I could rewrite a legacy C++ application in Go.

Being able to set the timeout per request, and not only globally, would be a good plus.

@FiloSottile
Contributor Author

@FiloSottile FiloSottile commented Dec 15, 2016

@bradfitz Sorry for not following up on this one; it turned out to be more nuanced than I thought (HTTP/2, brain, HTTP/2 exists) and I didn't have the time to look into it further.

Doing I/O in a goroutine with a timer sounds upside-down to me, but I don't have specific points to make against it. What would make more sense to me is a way to cancel the whole thing, which would make Body.Read/ResponseWriter.Write return an error that would then be handled normally.

Question: without such a mechanism, how is the user supposed to time out a read when using ReadHeaderTimeout?

    // ReadHeaderTimeout is the amount of time allowed to read
    // request headers. The connection's read deadline is reset
    // after reading the headers and the Handler can decide what
    // is considered too slow for the body.

@FiloSottile
Contributor Author

@FiloSottile FiloSottile commented Dec 15, 2016

One possible solution would be for Request.Body to be documented to allow Close concurrent with Read, and not to panic on double Close, and for ResponseWriter to be upgradeable to io.Closer.

@peter-mogensen

@peter-mogensen peter-mogensen commented Dec 16, 2016

For what it's worth ... I think we have a related use case. (only for reading requests and not for writing)

We have some slow clients doing large PUT requests, sometimes on unstable connections which die.
We would like to allow these PUT requests as long as there is actual progress and Read() returns data, preferably only on the endpoints where that should be allowed.

Currently we provide a net.Listener/net.Conn which sets a deadline on every Read/Write, but that seems not to be a viable solution, since it would interfere with the timeouts set by net/http.Server.

@peter-mogensen

@peter-mogensen peter-mogensen commented Feb 22, 2017

Would that work? Do all your I/O in separate goroutines, and panic if they take too long?

The problem is defining what "too long" means.
Often what you want is not to kill long-running I/O in absolute terms, but to kill "too slow" connections.
I don't want to kill a long-running PUT request as long as it's actually transferring data, but I would kill a request only sending 1 byte/second.

These demands for I/O activity should (for HTTP) only be enforced during ConnState StateActive (and StateNew). It would be OK for a connection to have no I/O activity in StateIdle.

I've been doing some experiments setting deadlines on connections, which is defeated by tls.Conn hiding the underlying connection object from external access.
I've also tried having a reaper goroutine Close connections with no I/O activity, which becomes equally messy, although not impossible. (**)

Being able to set demands for I/O activity during HTTP StateActive, or a more general way to do this for net.Conn without overwriting deadlines set by other APIs and regardless of whether the conn object is wrapped in TLS, would be nice :)

** PS: much of the complexity comes from not being able to access the underlying connection in a crypto/tls.Conn object. I can understand why the connection is not exposed to a Handler, but in the ConnState callback, is there any reason not to allow getting the underlying connection of a tls.Conn?

@pwaller
Contributor

@pwaller pwaller commented Feb 22, 2017

Just to throw in my 2¢: I have also personally hit the need to set timeouts on infinite connections, and have met someone at a Go meetup who independently brought up the same problem.

In terms of API, why can't the underlying ResponseWriter implement Set{Read,Write}Deadline? It seems to me that it could happily do the right thing of sending RST_STREAM for HTTP/2. Just because it happens to implement the same API you'd use to time out TCP connections doesn't mean it has to time out the TCP connection itself if there is another layer that can be used.

I agree with @FiloSottile's comment about the separate goroutine seeming upside-down. Though I sort of like it, it uses Go's primitives in a clever, non-obvious way, which makes me nervous and seems like it would be hard to document.

@peter-mogensen

@peter-mogensen peter-mogensen commented Feb 22, 2017

Having ResponseWriter implement Set*Deadline() would only solve the OP's problem of writing data to the client, not the problem of stopping a long-running PUT that doesn't really transfer data at any satisfying rate.

@pwaller
Contributor

@pwaller pwaller commented Feb 22, 2017

How about (*http.Request) or (*http.Request).Body implementing Set*Deadline?

@peter-mogensen

@peter-mogensen peter-mogensen commented Feb 22, 2017

If that can be implemented without interfering with other deadlines, like ReadTimeout, then it would be interesting to try.

@yonderblue

@yonderblue yonderblue commented Mar 3, 2017

@bradfitz Is it expected, with the current master of the http2 package, that panicking out of a handler while a left-behind goroutine is blocked on a body read or response writer write causes a panic internal to the http2 package? (The existing TimeoutHandler would have the same issue.) Or is the behavior you're referring to something that would be made safe if that route is chosen, but doesn't exist today?

@adg
Contributor

@adg adg commented Mar 10, 2017

We ran into this issue on our project: upspin/upspin#313

We had our timeouts set quite low, as per @FiloSottile's recommendations, but our users regularly send 1MB POST requests, which not all connections can deliver in a short time.

We would like to be able to set the timeout after reading the request headers. This would let us increase the timeouts for authenticated users only (we don't mind if our own users want to DoS us; at least we'll know who they are), but give unauthenticated users a very short timeout.

cc @robpike

@aead
Contributor

@aead aead commented Jan 23, 2020

I (again) ran into the issue of specifying handler-specific timeouts.

@mikelnrd AFAICS your approach does not work, since the connection you get from ConnContext func(ctx context.Context, c net.Conn) context.Context is the TCP/TLS connection, whereas the write timeout closes the (private) HTTP connection. At least I was not able to get this working: either all connections stick around forever, or they get closed after the WriteTimeout...

However, I may have found a similar workaround:

ConnContext: func(ctx context.Context, c net.Conn) context.Context {
	writeTimeout, cancelWriteTimeout := context.WithTimeout(ctx, 10*time.Second)
	go func() {
		defer cancelWriteTimeout() // Release resources
		<-writeTimeout.Done()      // Wait for the timeout or its cancellation
		if err := writeTimeout.Err(); err == context.DeadlineExceeded {
			// Only close the connection when the deadline is exceeded,
			// not on cancellation.
			c.Close()
		}
	}()
	return context.WithValue(ctx, ctx.Value(http.ServerContextKey), cancelWriteTimeout)
},

Here, I create a goroutine that waits for the writeTimeout. If the deadline is exceeded, we close the connection, since that's the purpose of the write timeout.
However, if the timeout is canceled, we don't close the connection. This allows handler functions that want to opt out of write timeouts to call the cancelWriteTimeout function, which they can access via http.Request.Context().

At the moment this approach has the problem that if a handler function returns (e.g. the request has been handled successfully), the write-timeout goroutine will stick around until the timeout elapses.
(Unfortunately, we can't use a select with <-ctx.Done() here; see the ListenAndServe implementation.)

Therefore, I wrap the server.Handler with:

type cancelWriteTimeoutHandler struct{ *http.ServeMux }

func (h cancelWriteTimeoutHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	defer func() {
		ctx := r.Context()
		f := ctx.Value(ctx.Value(http.ServerContextKey))
		if f, ok := f.(context.CancelFunc); ok {
			f()
		}
	}()
	h.ServeMux.ServeHTTP(w, r)
}

This handler will cancel the write timeout when the request has been handled. This ensures that the write-timeout goroutine only lives as long as the request.
For a minimal example see: https://play.golang.org/p/ZOXeHvmqznZ
A cleaner solution would be nice but this seems to work 🤞

@perillo
Contributor

@perillo perillo commented Feb 21, 2020

nginx have the send_timeout option: https://nginx.org/en/docs/http/ngx_http_core_module.html#send_timeout

Sets a timeout for transmitting a response to the client. The timeout is set only between two successive write operations, not for the transmission of the whole response. If the client does not receive anything within this time, the connection is closed.

This can be implemented with a new Server.SendTimeout field.

@tv42

@tv42 tv42 commented Feb 21, 2020

@perillo Settings for the whole server are explicitly not what this issue asks for. There's a need for handler-specific timeouts.

(Second, the idea of a per-write timeout is still open to slowloris abuse.)

@lauri-elevant

@lauri-elevant lauri-elevant commented Jun 19, 2020

This issue needs attention.

There are many reasonable use cases, like different policies depending on authentication, or varied types of requests. In my case, a single service usually serves small responses, but sometimes, after authentication and understanding the request, it turns out that the file to be served is huge and might take a legitimate user hours to download.

So you have to balance between:

  1. Setting a "high enough" write timeout for most users to succeed
  2. Setting a "low enough" write timeout to not be totally exposed to a slowloris attack

So you end up with an intersection of two very bad choices.

It is hard to comprehend that there is no way to keep a legitimate connection going while giving unauthenticated users a proper timeout. I cannot understand how people are able to overlook this and expose Go services to the internet relying on net/http.

Is there any acceptable workaround besides having a reverse proxy babysitting the Go service? Or another package to drop in place of net/http?

arp242 added a commit to arp242/goatcounter that referenced this issue Jul 17, 2020
When using GoatCounter directly internet-facing it's liable to keep
connections around for far too long, exhausting the max. number of open
file descriptors, especially with "idle" HTTP/2 connections which,
unlike HTTP/1.1 Keep-Alive don't have an explicit timeout.

This isn't much of a problem if you're using a proxy in front of it, as
most will have some timeouts set by default (unlike Go, which has no
timeouts at all by default).

For the backend interface, keeping a long timeout makes sense; it
reduces overhead on requests (TLS setup alone can be >200ms), but for
the /count request we typically want a much shorter timeout.

Unfortunately, configuring timeouts per-endpoint isn't really supported
at this point, although some possible workarounds are mentioned in [1],
it's all pretty ugly.
We can add "Connection: close" to just close the connection, which is
probably much better for almost all cases than keeping a connection open
since most people only visit a single page, and keeping a connection
open in the off-chance they click somewhere again probably isn't really
worth it.

And setting *any* timeout is better than setting *no* timeout at all!

---

In the email conversation about this, the other person mentioned that
some connections linger for hours, so there *may* be an additional
problem either in the Go library or in GoatCounter, since this is a
really long time for a connection to stay open. Then again, it could
also be weird scripts or malicious users 🤔

[1]: golang/go#16100
@hrissan

@hrissan hrissan commented Dec 3, 2020

My project also needs per-handler timeouts. So I can do something like

req.SetReadTimeout()
req.SetWriteTimeout()

in the handler after classifying the incoming request.

@networkimprov

@networkimprov networkimprov commented Dec 4, 2020

@bradfitz @fraenkel @FiloSottile could any of you file a specific proposal from the above discussion in a new issue?

@karaatanassov

@karaatanassov karaatanassov commented Mar 26, 2021

For what it is worth, I seem to have found a simple workaround. The idea is to use the response writer from a separate goroutine that is linked to the http handler with a channel. This allows the handler to return and close the request/response when a slow consumer is detected or the connection breaks.

See https://github.com/karaatanassov/go_http_write_timeout/blob/989c390106c8344974b72d2207ff7de61357b6ac/main.go#L14 for example

func handler(w http.ResponseWriter, r *http.Request) {
	log.Print("Request received.")
	defer wg.Done()
	defer log.Print("Request done.")
	ctx := r.Context() // If we generate error consider context.WithCancel
	publisherChan := streamPublisher(ctx)
	resultChan := make(chan []byte)
	go writeResult(resultChan, w)
	for value := range publisherChan {
		select {
		case resultChan <- value:

		case <-r.Context().Done(): // Client has closed the socket.
			log.Print("Request is Done.")
			close(resultChan) // Close the result channel to exit the writer.
			return
		case <-time.After(1 * time.Second): // The socket is not writeable in given timeout, quit
			log.Print("Output channel is not writable for 1 second. Close and exit.")
			close(resultChan) // Close the result channel to exit the writer.
			return
		}
	}
}

func writeResult(resultChan chan []byte, w http.ResponseWriter) {
	defer wg.Done()
	for r := range resultChan {
		w.Write(r)
	}
	log.Print("Write Result is done.")
}

In my observation the blocked Write() operation terminates 5 seconds after the http handler function returns.

I am not sure if this is safe for production, so I will play with it. But it seems the situation with this blocking Write() call with no timeout is not as desperate as it looks.

@costela
Copy link
Contributor

@costela costela commented Mar 29, 2021

@karaatanassov this is a bit off-topic, but just to avoid problems for the people who find your snippet: unless I'm overlooking something, it unfortunately doesn't really solve the underlying problem. It just "sweeps it under the rug", so to speak. Your handler will return, but the writeResult goroutine will still be there for as long as w.Write blocks (e.g. because of a slow reader). This means the underlying connection is still open, so resources are still being consumed by the client.

@karaatanassov

@karaatanassov karaatanassov commented Mar 29, 2021

@costela go check out the git project. It actually releases the w.Write().

It is a complete solution at least in my test. I want to do experiments with http/1 and http/2 as the underlying systems are quite different.

It is a kind of logical solution, given that the only way to tell Go to close the response is by returning from the handler function. The http.ResponseWriter has no Close() method as in other languages.

@ItalyPaleAle

@ItalyPaleAle ItalyPaleAle commented Mar 29, 2021

(sorry if partly OT)

go check out the git project.

@karaatanassov you might need to add a LICENSE to your project; otherwise no one will be able to use your code as an example

@tv42

@tv42 tv42 commented Mar 29, 2021

@karaatanassov Your code is buggy because

A ResponseWriter may not be used after the Handler.ServeHTTP method has returned.

https://golang.org/pkg/net/http/#ResponseWriter

@karaatanassov

@karaatanassov karaatanassov commented Mar 30, 2021

@karaatanassov Your code is buggy because

A ResponseWriter may not be used after the Handler.ServeHTTP method has returned.

https://golang.org/pkg/net/http/#ResponseWriter

Good point. Technically, the code is not using the ResponseWriter after the http handler has returned; exactly the opposite, it returns in order to stop using it. I agree it is not the cleanest thing, but returning from the handler is the one way to end the response, so that is why I used it.

A cleaner way could be to use Request.WithContext to create a cancelable request. That seems cleaner and is indeed recommended in the http package.

I will still keep the ResponseWriter.Write() in a different goroutine, though, as there is no non-blocking version of it.

PS: I have updated the example to use a cancelable Request and not to return from the handler before the response is released. https://github.com/karaatanassov/go_http_write_timeout

@costela
Contributor

@costela costela commented Mar 30, 2021

@karaatanassov

go check out the git project. It actually releases the w.Write()

I'm pretty sure it doesn't 🙅
Your code only looks like it's working because your calls to w.Write are writing very small chunks. You are effectively relying on the socket buffers, meaning w.Write is effectively non-blocking until the buffer is full. Try out this slightly modified version of your code (without the reader). Start it and run a slow reader against it (e.g. curl -Ns http://localhost:8080 | pv -q -L 1). You'll see a bunch of "writing" output until the buffer is full. Then you'll see your handler return, but the writeResult goroutine will remain active for as long as the slow reader keeps slowly reading.
So this is unfortunately not a solution to the original problem, since the connection is still open.

Also, @tv42's comment is correct and should be reason enough to avoid this solution. Your code most definitely is using the ResponseWriter after the handler returns.

But let's try not to derail this issue any further. If you want, we can keep talking about this on the linked gist's comments.

@karaatanassov

@karaatanassov karaatanassov commented Mar 31, 2021

To conclude: there is no way to end requests with a pending write on Linux and Windows. The workarounds I suggested work on macOS only.

It seems a legitimate bug that http requests with a pending write cannot be completed/cancelled. This makes Go servers precarious for internet use. It is not only a timeout that is missing.

@peter-mogensen

@peter-mogensen peter-mogensen commented Mar 31, 2021

Well ... not in any elegant way.
But I do have a workaround (unfortunately using an unsafe hack to support TLS connections) which allows a monitoring goroutine to decide when to call Close() on the underlying net.Conn.
It's not ideal, but it works, at least for HTTP/1.x.

@chrispassas

@chrispassas chrispassas commented Apr 26, 2021

I believe I've run into this same issue. If, while looping over data to write back to a user, they disconnect, often the context will not save me from an infinite block on Write().

Example:

func (a *App) Foo(w http.ResponseWriter, r *http.Request) {
	ctx := r.Context()

	for x := 0; x < 100; x++ {
		select {
		case <-ctx.Done():
			// User has disconnected, stopping
			return
		default:
			// Assume the user just disconnected
			w.Write([]byte("example output")) // blocks until the http server write timeout
		}
	}
}

In my real project I might have to write back data for several minutes so I can't just set a low WriteTimeout.

Proposed solutions

  • Add a w.WriteContext(context.Context, []byte) method
  • Add a ResponseWriter SetWriteTimeout(duration) method

@drakkan

@drakkan drakkan commented May 1, 2021

Until Go supports something like this:

http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_read_timeout
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_send_timeout

if you need to read large requests or serve large responses (for example to upload/download files), it is better to run your Go application behind a reverse proxy like nginx.

@drakkan

@drakkan drakkan commented May 7, 2021

Based on the gist here, I ended up with this hack/workaround

type listener struct {
	net.Listener
	ReadTimeout  time.Duration
	WriteTimeout time.Duration
}

func (l *listener) Accept() (net.Conn, error) {
	c, err := l.Listener.Accept()
	if err != nil {
		return nil, err
	}
	tc := &Conn{
		Conn:                     c,
		ReadTimeout:              l.ReadTimeout,
		WriteTimeout:             l.WriteTimeout,
		ReadThreshold:            int32((l.ReadTimeout * 1024) / time.Second),
		WriteThreshold:           int32((l.WriteTimeout * 1024) / time.Second),
		BytesReadFromDeadline:    0,
		BytesWrittenFromDeadline: 0,
	}
	return tc, nil
}

// Conn wraps a net.Conn, and sets a deadline for every read
// and write operation.
type Conn struct {
	net.Conn
	ReadTimeout              time.Duration
	WriteTimeout             time.Duration
	ReadThreshold            int32
	WriteThreshold           int32
	BytesReadFromDeadline    int32
	BytesWrittenFromDeadline int32
}

func (c *Conn) Read(b []byte) (n int, err error) {
	if atomic.LoadInt32(&c.BytesReadFromDeadline) > c.ReadThreshold {
		atomic.StoreInt32(&c.BytesReadFromDeadline, 0)
		// we set both read and write deadlines here otherwise after the request
		// is read writing the response fails with an i/o timeout error
		err = c.Conn.SetDeadline(time.Now().Add(c.ReadTimeout))
		if err != nil {
			return 0, err
		}
	}
	n, err = c.Conn.Read(b)
	atomic.AddInt32(&c.BytesReadFromDeadline, int32(n))
	return
}

func (c *Conn) Write(b []byte) (n int, err error) {
	if atomic.LoadInt32(&c.BytesWrittenFromDeadline) > c.WriteThreshold {
		atomic.StoreInt32(&c.BytesWrittenFromDeadline, 0)
		// we extend the read deadline too, not sure it's necessary,
		// but it doesn't hurt
		err = c.Conn.SetDeadline(time.Now().Add(c.WriteTimeout))
		if err != nil {
			return
		}
	}
	n, err = c.Conn.Write(b)
	atomic.AddInt32(&c.BytesWrittenFromDeadline, int32(n))
	return
}

func newListener(network, addr string, readTimeout, writeTimeout time.Duration) (net.Listener, error) {
	l, err := net.Listen(network, addr)
	if err != nil {
		return nil, err
	}

	tl := &listener{
		Listener:     l,
		ReadTimeout:  readTimeout,
		WriteTimeout: writeTimeout,
	}
	return tl, nil
}

For my use case I use 60 seconds for the http.Server and listener read/write timeouts. This way slowloris is no longer an issue. I hope this can help others, and that a proper solution will be included in http.Server directly. Please note that http.Server sets read and write deadlines internally, so this workaround could break in the future.
