
Proxy connection buffer necessary? #7327

Open
flowchartsman opened this issue Jun 16, 2024 · 5 comments · May be fixed by #7424

@flowchartsman

flowchartsman commented Jun 16, 2024

While attempting to implement a proxied connection via a configuration value (as opposed to one provided by the environment), I noticed the following type in internal/transport/proxy.go:

// To read a response from a net.Conn, http.ReadResponse() takes a bufio.Reader.
// It's possible that this reader reads more than what's needed for the response and stores
// those bytes in the buffer.
// bufConn wraps the original net.Conn and the bufio.Reader to make sure we don't lose the
// bytes in the buffer.
type bufConn struct {
	net.Conn
	r io.Reader
}

func (c *bufConn) Read(b []byte) (int, error) {
	return c.r.Read(b)
}

The description seems to indicate that this type exists out of concern that the *bufio.Reader wrapping the net.Conn will read more bytes than are necessary to fulfill the call to http.ReadResponse in doHTTPConnectHandshake, potentially consuming some of the initial bytes of the TLS exchange. However, the code that serves the same purpose in http.Transport asserts that this won't happen, since the TLS host will not respond until the conn is written to again, making the *bufio.Reader safe to throw away.

Has anyone actually observed the reader consuming more than just the CONNECT response, or is this an overabundance of caution? I may have missed something, but this doesn't seem too different from the way it's done in http.Transport.

I don't know how much it affects throughput, but it's certainly wasteful to keep another buffered reader between you and the net.Conn if you don't need it.

I removed the wrapper in favor of returning the conn directly from doHTTPConnectHandshake, and all of the tests still pass, though that's not necessarily compelling proof of a negative. I can try to make a stronger case if needed; I just have to figure out whether it can be effectively tested.

@aranjans aranjans self-assigned this Jun 17, 2024
@aranjans
Contributor

aranjans commented Jul 1, 2024

Hi @flowchartsman, thanks for writing in. We are currently short on bandwidth, so please feel free to send us a PR for this; we will be happy to review it. Thanks!

@arjan-bal
Contributor

I'll try to figure out why this buffer was introduced, and whether we still need it, by going through the code and commit history.

@arjan-bal
Contributor

arjan-bal commented Jul 14, 2024

From my understanding, the standard library code is specifically for proxying HTTPS, so it can correctly assume that the server waits for the client to send the first packet of the TLS handshake after the HTTP tunnel is established.

In the case of gRPC, however, we can't assume that TLS is always used or that the client initiates communication. Users can provide their own channel encryption/authentication implementations via the Credentials API. If some Credentials implementation dictates that the server initiate the handshake, I believe its first TCP packet could be lost if we discard the buffer.

@arjan-bal
Contributor

I believe TLS is used in most cases, so checking for r.Buffered() == 0 and returning the unwrapped conn would avoid the extra layer of buffering in the common case.

Will discuss this with other maintainers.

@arjan-bal
Contributor

I've raised a PR with a unit test for the buffering behaviour and also to discard the buffer when possible: #7424
