net/http: potential client transport lock contention on high CPU core count machines #41238
Labels
NeedsInvestigation
Performance
Milestone
What version of Go are you using (go version)?

Does this issue reproduce with the latest release?
Yes
What operating system and processor architecture are you using (go env)?

OS: Fedora/YUM-based OS (RHEL, CentOS, OEL, etc.)
Architecture: amd64
Kernel: Reproduced on anything from 4.1 to 5.8

go env Output

What did you do?
While writing an extremely basic HTTP load tool, I noticed some odd behavior when reusing the *http.Client (and its embedded *http.Transport).
The tool prepares an *http.Client and an *http.Request, then spins up multiple worker goroutines. Each worker clones the *http.Request (since an *http.Request must not be reused concurrently) and uses the shared *http.Client (which is safe for concurrent use) to send as many requests per second as possible to a server of the user's choice.
Here is a hacky, simplified, and shortened but functioning example that tries to showcase my observation. It sends requests to a URL specified on the command line and runs two 30-second passes: the first pass uses a shared client/transport, and the second uses a new client/transport per worker.
Code Example
What did you expect to see?
The same RPS throughput for both scenarios in any environment
What did you see instead?
When running this on my local DEV machine (Intel(R) Core(TM) i5-8259U CPU @ 2.30GHz) against a server (Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz), I get the expected results:
Both approaches result in a similar RPS.
However, when I run the same code on a client machine identical to the server (Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz), I see a ~90% increase in RPS throughput when using a separate client/transport for each worker goroutine:
This happens regardless of whether I use plain HTTP or HTTP-over-TLS (HTTPS).
Is this expected?
Thank you in advance for the time anyone takes to look into this.