Does this issue reproduce with the latest release?
What operating system and processor architecture are you using (go env)?
What did you do?
When running a large number of concurrent requests to Amazon S3 with a request timeout set (either on the http.Client instance or via a context set directly on the request), we noticed an unexpectedly large number of long-running requests. When we removed the timeout, the number of long-running requests dropped. The long-running requests were not directly caused by the timeout being hit: all requests completed in under the timeout.
We created a standalone program to reproduce the problem and added logging via httptrace. In the httptrace output we observed a large number of requests reported with the error context canceled to the TLSHandshakeDone callback in our trace. These canceled handshakes did not surface as failed requests from the http client.
Digging into the http Transport code, it appears that when a connection is not immediately available in the connection pool, the Transport starts a race between obtaining a connection returned to the pool and dialing a new connection. In our case, the "obtain a connection returned to the pool" leg was generally winning the race. The behavior on the losing side differed depending on whether the request used a timeout. On requests without a timeout, the losing leg continued through the TLS handshake and was then placed into the pool as a valid connection. On requests with a timeout, the losing leg was aborted mid-TLS-handshake: once the request completed on the pooled connection, its context was canceled, which canceled the in-flight dial.
The net result was that whenever a request legitimately required a new connection to be established, it was often queued up (probably at the server end) behind a large number of TLS handshakes that would be canceled in flight. This manifested as excessive time to complete the request and noticeably lower throughput.
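The race described above can be sketched with channels. This is an illustration of the dynamic, not the actual Transport code; the delays and names are made up.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// raceOnce models one request's race between an idle connection being
// returned to the pool and a fresh dial. It reports which leg won and
// whether the dial was aborted by the request context's cancelation.
func raceOnce() (winner string, dialAborted bool) {
	ctx, cancel := context.WithCancel(context.Background())

	idle := make(chan string, 1)
	aborted := make(chan bool, 1)

	// A connection is returned to the pool quickly.
	go func() {
		time.Sleep(10 * time.Millisecond)
		idle <- "idle conn"
	}()

	// A new dial starts at the same time; it is slower because it
	// includes a TLS handshake.
	go func() {
		select {
		case <-time.After(100 * time.Millisecond):
			aborted <- false // handshake finished; conn could join the pool
		case <-ctx.Done():
			aborted <- true // with a request timeout set, the dial dies mid-handshake
		}
	}()

	winner = <-idle // the pooled connection wins
	cancel()        // request done; its context cancels the losing dial
	dialAborted = <-aborted
	return winner, dialAborted
}

func main() {
	w, aborted := raceOnce()
	fmt.Println("won the race:", w)
	fmt.Println("losing dial aborted:", aborted)
}
```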
What did you expect to see?
Client does not produce a large volume of aborted TLS handshakes to server.
What did you see instead?
Slowness caused by excessive aborted TLS handshakes to the server.
To avoid problems exactly like this, we shouldn't be using an individual request's context to dial a connection that may be reused by many requests.
I suspect that someone is relying on passing values down to the dial via the context, however, so we likely need to decouple the request context's values from its cancelation. (Conveniently, #40221 has been implemented now, so this is feasible.)
I'm not certain what the correct cancelation behavior is. Perhaps cancel a dial once there are no requests blocked on it? There might still be some access patterns where we repeatedly cancel an in-progress dial, but this would at least avoid the scenario where we cancel a dial call that a request was hoping to make use of.