
net/http: setting a timeout on http requests that use TLS can result in excessive requests to server #59017

Open
mpoindexter opened this issue Mar 14, 2023 · 6 comments
Labels: help wanted, NeedsFix, Performance

mpoindexter commented Mar 14, 2023

What version of Go are you using (go version)?

go version go1.20.2 darwin/amd64

Does this issue reproduce with the latest release?

Yes

What operating system and processor architecture are you using (go env)?

GOARCH="amd64"
GOOS="darwin"

What did you do?

When running a large number of concurrent requests to Amazon S3 with a request timeout set (either on the http.Client instance or via a context set directly on the request), we noticed an unexpectedly large number of long-running requests. When we removed the timeout, the number of long-running requests dropped. The long-running requests were not directly caused by the timeout being hit: all requests completed within the timeout.
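
For reference, the timeout can be set in either of these two ways (a minimal sketch; the URL and duration are placeholders):

package main

import (
	"context"
	"log"
	"net/http"
	"time"
)

func main() {
	// Option 1: a client-wide timeout, covering the whole exchange
	// including connection setup.
	client := &http.Client{Timeout: 5 * time.Second}

	// Option 2: a per-request deadline carried on the request context.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, "https://example.com/", nil)
	if err != nil {
		log.Fatal(err)
	}
	resp, err := client.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
}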

We created a standalone program to reproduce the problem and added logging via httptrace. In the httptrace output we observed a large number of requests for which the TLSHandshakeDone callback in our trace reported the error context canceled. These canceled handshakes did not surface as failed requests from the http client.
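
The trace hook looked roughly like this (a sketch of the kind of logging we used, not the exact reproduction program; the URL is a placeholder):

package main

import (
	"context"
	"crypto/tls"
	"log"
	"net/http"
	"net/http/httptrace"
)

func main() {
	trace := &httptrace.ClientTrace{
		TLSHandshakeDone: func(state tls.ConnectionState, err error) {
			if err != nil {
				// For the losing dials this logs "context canceled".
				log.Printf("TLS handshake aborted: %v", err)
			}
		},
	}
	ctx := httptrace.WithClientTrace(context.Background(), trace)
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, "https://example.com/", nil)
	if err != nil {
		log.Fatal(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
}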

Digging into the http Transport code, it appears that when a connection is not immediately available in the connection pool, the Transport starts a race between waiting for a connection to be returned to the pool and dialing a new connection. In our case, the "connection returned to the pool" side was generally winning the race. The behavior on the losing side differed depending on whether the request had a timeout. Without a timeout, the losing dial continued through the TLS handshake and was then placed into the pool as a valid connection. With a timeout, the losing dial was aborted mid-TLS-handshake, because the request context was canceled as soon as the request completed on the pooled connection.

The net result was that whenever a request legitimately required a new connection to be established, it was often queued up (probably at the server end) behind a large number of TLS handshakes that would be canceled in flight. This manifested as excessive time to complete the request and noticeably lower throughput.

What did you expect to see?

The client does not produce a large volume of aborted TLS handshakes to the server.

What did you see instead?

Slowness caused by excessive aborted TLS handshakes to the server.

I created a gist that reproduces this at https://gist.github.com/mpoindexter/f6bc9dac16290343efba17129c49f9d5. If you uncomment the timeout on line 56 of the gist, the throughput of the test drops and stalls periodically; if the timeout remains commented out, throughput stays steady.

To test the analysis above, I implemented a custom DialTLSContext function that performs the TLS dial on a fresh context, carrying over only the deadline, whenever a deadline is set on the original context. This resolved the problem.

import (
	"context"
	"crypto/tls"
	"net"
	"net/http"
	"time"
)

// ApplyFix replaces the transport's TLS dialer with one that detaches the
// handshake from per-request cancelation while preserving the deadline.
func ApplyFix(transport *http.Transport) {
	dialer := &tls.Dialer{
		Config: transport.TLSClientConfig,
	}

	transport.DialTLSContext = func(ctx context.Context, network, addr string) (net.Conn, error) {
		// By default, dial on the request context.
		dialCtx := ctx
		if deadline, ok := ctx.Deadline(); ok {
			if timeout := time.Until(deadline); timeout > 0 {
				// Dial on a fresh context that carries only the original
				// deadline, so canceling the request does not abort a
				// handshake whose connection could still be pooled.
				var cancel context.CancelFunc
				dialCtx, cancel = context.WithTimeout(context.Background(), timeout)
				defer cancel()
			}
		}
		return dialer.DialContext(dialCtx, network, addr)
	}
}
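
Applying the workaround to a clone of the default transport might look like this (a sketch; the timeout value is arbitrary):

transport := http.DefaultTransport.(*http.Transport).Clone()
ApplyFix(transport)
client := &http.Client{
	Transport: transport,
	Timeout:   5 * time.Second,
}

One caveat: dialing on context.Background() also discards any values stored on the request context, which is the value-propagation concern @neild raises below.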
seankhliao added the Performance and NeedsInvestigation labels on Mar 14, 2023
seankhliao (Member) commented:

cc @neild

mark4z commented Apr 4, 2023

Same issue here when using nginx as the server.

neild (Contributor) commented Apr 6, 2023

To avoid problems exactly like this, we shouldn't be using an individual request's context to dial a connection that may be reused for many requests.

I suspect that someone is relying on passing values down to the dial via the context, however, so we likely need to decouple the request context's values from its cancelation. (Conveniently, #40221 has been implemented now, so this is feasible.)

I'm not certain what the correct cancelation behavior is. Perhaps cancel a dial once there are no requests blocked on it? There might still be some access patterns where we repeatedly cancel an in-progress dial, but this would at least avoid the scenario where we cancel a dial call that a request was hoping to make use of.
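
A rough sketch of that idea (all names here are hypothetical, not actual Transport code): track how many requests are waiting on each in-flight dial, and cancel the dial's context only when that count drops to zero.

import (
	"context"
	"sync"
)

// sharedDial ties a dial's lifetime to the set of requests waiting on it,
// rather than to any single request's context.
type sharedDial struct {
	mu      sync.Mutex
	waiters int
	cancel  context.CancelFunc
}

// newSharedDial returns the dial bookkeeping plus the context to dial on.
func newSharedDial() (*sharedDial, context.Context) {
	ctx, cancel := context.WithCancel(context.Background())
	return &sharedDial{cancel: cancel}, ctx
}

func (d *sharedDial) addWaiter() {
	d.mu.Lock()
	defer d.mu.Unlock()
	d.waiters++
}

// removeWaiter is called when a request stops waiting, e.g. because it won
// a pooled connection instead. The dial is canceled only once no request
// could still use its result.
func (d *sharedDial) removeWaiter() {
	d.mu.Lock()
	defer d.mu.Unlock()
	d.waiters--
	if d.waiters == 0 {
		d.cancel()
	}
}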

neild self-assigned this on Apr 6, 2023
neild added the help wanted and NeedsFix labels and removed the NeedsInvestigation label on Apr 6, 2023
AlexanderYastrebov (Contributor) commented Jul 22, 2023

We shouldn't be using an individual request's context to dial a connection which may be reused for many requests

This would probably fix #50798 as well. (Correction: it would not; context.WithoutCancel(req.Context()) would still retain a reference to the request context.)

diff --git a/src/net/http/transport.go b/src/net/http/transport.go
index c07352b018..ce34ef448c 100644
--- a/src/net/http/transport.go
+++ b/src/net/http/transport.go
@@ -1349,7 +1349,7 @@ func (t *Transport) customDialTLS(ctx context.Context, network, addr string) (co
 func (t *Transport) getConn(treq *transportRequest, cm connectMethod) (pc *persistConn, err error) {
        req := treq.Request
        trace := treq.trace
-       ctx := req.Context()
+       ctx := context.WithoutCancel(req.Context())
        if trace != nil && trace.GetConn != nil {
                trace.GetConn(cm.addr())
        }

jarreds commented Jan 29, 2024

Using the ApplyFix workaround outlined by @mpoindexter, we saw a significant drop in the number of aborted TLS handshakes. The red line marks the deploy of that fix:

[Graph of aborted TLS handshakes over time; the rate drops sharply at the red deploy marker.]

Those graphed metrics are directly out of net/http/httptrace.

One thing that's not clear to me at the moment is whether those dialed connections are pooled as idle connections, since metrics on the connection pool are not readily accessible.
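
One indirect way to observe it from the client side (a sketch using standard httptrace hooks, not a real pool metric; the URL is a placeholder): GotConnInfo reports whether each request's connection came from the idle pool.

package main

import (
	"context"
	"log"
	"net/http"
	"net/http/httptrace"
)

func main() {
	trace := &httptrace.ClientTrace{
		GotConn: func(info httptrace.GotConnInfo) {
			// Reused/WasIdle report whether the connection came from the
			// idle pool; IdleTime is how long it sat there.
			log.Printf("gotConn reused=%v wasIdle=%v idleTime=%v",
				info.Reused, info.WasIdle, info.IdleTime)
		},
	}
	ctx := httptrace.WithClientTrace(context.Background(), trace)
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, "https://example.com/", nil)
	if err != nil {
		log.Fatal(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
}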
