Description
The CL that disables connection pooling for HTTP/2 introduces a significant discontinuity in throughput when the server advertises a small maximum number of concurrent streams.
https://go-review.googlesource.com/c/net/+/53250
HTTP/2 support is automatically enabled in Go under conditions the developer does not always control. For example, configuration files often switch between http and https endpoints: for an http endpoint Go uses HTTP/1.1, whereas https endpoints negotiate HTTP/2.
The default HTTP/1 transport creates as many connections as needed in the background. The default HTTP/2 transport does not (although it used to).
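As a minimal sketch of that protocol selection (https://example.com is only a placeholder host), the negotiated protocol can be read from the response:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Plain-text endpoint: the default transport speaks HTTP/1.1.
	// TLS endpoint: ALPN typically negotiates HTTP/2 with no opt-in from the caller.
	for _, url := range []string{"http://example.com/", "https://example.com/"} {
		resp, err := http.Get(url)
		if err != nil {
			fmt.Println(url, "error:", err)
			continue
		}
		resp.Body.Close()
		fmt.Println(url, "->", resp.Proto) // e.g. "HTTP/1.1" vs "HTTP/2.0"
	}
}
```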
As a result, HTTP/1 endpoints see artificially high throughput compared to HTTP/2 endpoints, which block waiting for streams to become available instead of creating a new connection. For example, an AWS ALB limits the maximum number of concurrent streams to 128.
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html
The HTTP/2 client blocks once it hits 128 streams and waits for more to become available; the HTTP/1 client does not. As a result the HTTP/1 client is orders of magnitude faster. This effect is annoying and creates a leaky abstraction in the net/http package.
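One rough way to observe the difference, sketched below under the assumption of an endpoint behind a stream-limited proxy such as an ALB (the URL is a placeholder), is to fire more concurrent requests than the stream limit and count distinct connections with net/http/httptrace:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptrace"
	"sync"
)

func main() {
	// Placeholder target; substitute an endpoint behind a stream-limited
	// proxy (e.g. an AWS ALB) to reproduce the effect described above.
	const target = "https://example.com/"

	var (
		mu    sync.Mutex
		conns = map[string]bool{} // distinct local addresses observed
	)
	trace := &httptrace.ClientTrace{
		GotConn: func(info httptrace.GotConnInfo) {
			mu.Lock()
			conns[info.Conn.LocalAddr().String()] = true
			mu.Unlock()
		},
	}

	var wg sync.WaitGroup
	for i := 0; i < 500; i++ { // well beyond a 128-stream limit
		wg.Add(1)
		go func() {
			defer wg.Done()
			req, err := http.NewRequest("GET", target, nil)
			if err != nil {
				return
			}
			req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))
			resp, err := http.DefaultClient.Do(req)
			if err != nil {
				return
			}
			resp.Body.Close()
		}()
	}
	wg.Wait()
	// Over HTTP/1.1 many connections are dialed; over HTTP/2 the requests
	// multiplex onto a single connection and queue once streams run out.
	fmt.Println("distinct connections used:", len(conns))
}
```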
The consequence of this is that importers of the net/http package now have to:
1.) Distinguish between HTTP and HTTPS endpoints
2.) Write a custom connection pool for the transport when HTTP/2 is enabled (see the sketch below)
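As a rough sketch of workaround 1.): setting a non-nil, empty TLSNextProto map on a Transport is the documented way to opt out of the automatic HTTP/2 upgrade, forcing HTTP/1.1 even for https endpoints (newHTTP1Client below is a hypothetical helper name). Workaround 2.) would instead mean plugging a custom implementation of the http2.ClientConnPool interface into http2.Transport.ConnPool from golang.org/x/net/http2, which is considerably more work.

```go
package main

import (
	"crypto/tls"
	"net/http"
	"time"
)

// newHTTP1Client returns a client whose transport never negotiates HTTP/2,
// so https endpoints fall back to HTTP/1.1 and regain the old per-host
// connection pooling behavior.
func newHTTP1Client() *http.Client {
	tr := &http.Transport{
		// A non-nil, empty map disables the automatic HTTP/2 upgrade.
		TLSNextProto: map[string]func(string, *tls.Conn) http.RoundTripper{},
	}
	return &http.Client{
		Transport: tr,
		Timeout:   30 * time.Second,
	}
}

func main() {
	client := newHTTP1Client()
	resp, err := client.Get("https://example.com/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// resp.Proto reports "HTTP/1.1" even though the endpoint is https.
}
```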
I think the previous pooling functionality should be restored.