
cleanup references to Picker through dopts #397

Closed

Conversation

@mwitkow (Contributor) commented Oct 11, 2015

While working on a reverse proxy for gRPC (https://github.com/mwitkow-io/grpc-proxy) I stumbled on the problem of getting a new ClientTransport and a new ClientStream in raw form. That's because the reverse proxy skips framing and operates only on the metadata of the call.

One way to do this is to expose the Picker as an interface of ClientConn, which is what this CL does. Alternatively, a NewTransport method could be added to ClientConn that hides the Picker completely.
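
For concreteness, a minimal sketch of the two shapes being discussed, using simplified stand-in types rather than the real grpc-go API (Picker, ClientTransport, and ClientConn are all reduced to bare essentials here):

```go
// Sketch only: simplified stand-ins for the grpc-go types under discussion,
// not the real API.
package grpcsketch

// ClientTransport stands in for transport.ClientTransport.
type ClientTransport interface {
	Close() error
}

// Picker stands in for the picker supplied as a dial option.
type Picker interface {
	// Pick returns a transport to carry the next RPC.
	Pick() (ClientTransport, error)
}

// ClientConn stands in for grpc.ClientConn, which owns the picker.
type ClientConn struct {
	picker Picker
}

// Option 1 (this CL): expose the picker itself.
func (cc *ClientConn) Picker() Picker { return cc.picker }

// Option 2: hide the picker and hand out a transport directly.
func (cc *ClientConn) NewTransport() (ClientTransport, error) {
	return cc.picker.Pick()
}
```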

@iamqizhao (Contributor) commented:

I am not sure what you want exactly (can you elaborate?). But since the picker is a dial option passed in by users, it does not seem to make sense to have a getter for it.

@mwitkow (Contributor, Author) commented Oct 11, 2015

For reference: https://github.com/mwitkow-io/grpc-proxy/blob/master/proxy.go#L187

I can't rely on getting the Picker from the user, as the public interface for "dialing" is ClientConn. Thus, I made ClientConn the interface I expect users of the gRPC proxy to use to specify the destinations of their RPCs.

Basically, the problem I'm facing is that there is no way of getting a transport.ClientTransport out of a ClientConn, which is needed for raw shoveling of data frames.

@iamqizhao (Contributor) commented:

Exposing transport.ClientTransport through Picker is not the way to go. A Picker could implement various load-balancing policies, such as round robin, in which case the single ClientTransport you get from Picker() is meaningless. I expect proxy developers to use the transport package directly and implement the custom upper-layer logic themselves (which may or may not be the same as what ClientConn does).
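
To illustrate the point about state, a hedged sketch of what a round-robin picker might look like (hypothetical code continuing the stand-in package from the first sketch, not grpc-go's implementation): every Pick() advances the rotation, so a caller grabbing a transport out of band both perturbs the policy and only ever sees one arbitrary backend.

```go
// Hypothetical round-robin picker over the stand-in types from the first
// sketch (needs "errors" and "sync" imported). Pick mutates internal state
// on every call, which is why handing it to users is problematic.
type roundRobinPicker struct {
	mu         sync.Mutex
	transports []ClientTransport // one live transport per backend address
	next       int
}

func (p *roundRobinPicker) Pick() (ClientTransport, error) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if len(p.transports) == 0 {
		return nil, errors.New("no transports available")
	}
	t := p.transports[p.next]
	p.next = (p.next + 1) % len(p.transports) // state change on every Pick
	return t, nil
}
```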

@mwitkow (Contributor, Author) commented Oct 11, 2015

I actually would really like to piggy-back on the Picker, as the combination of a proxy plus naming-backed pools of machines would be very powerful. A round-robined ClientTransport from the Picker is exactly what I'd need to provide load balancing with no effort :)

The problem is that there is currently no way to get a ClientTransport out of either a Picker or a ClientConn. The logic is currently hidden behind NewClientStream, which wraps it up in a bunch of private encoding abstractions.

Either of the two would be immensely helpful:

  • ClientConn.Picker() -> Picker
  • ClientConn.NewTransport() -> (ClientTransport, error)

This means that the ClientConn, which is the user-visible manifestation of a "gRPC backend" (if I understand the intention), with all the LB and backend health checking, would be able to return a transport.ClientTransport for raw consumption.
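
In proxy terms, the second option would be consumed roughly like this (a sketch against the stand-in types from the first snippet; forwardRawCall and its body are illustrative, not real grpc-proxy code):

```go
// Hypothetical proxy-side usage of the proposed ClientConn.NewTransport
// (stand-in types from the first sketch). The ClientConn keeps all of its
// LB and health-checking behavior; the proxy only asks it for a transport.
func forwardRawCall(backendConn *ClientConn) error {
	t, err := backendConn.NewTransport()
	if err != nil {
		return err
	}
	// With a raw transport in hand, the proxy would open a stream on it
	// and copy data frames verbatim, touching only the call metadata and
	// never the message encoding. That shoveling is elided here.
	_ = t
	return nil
}
```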

@mwitkow (Contributor, Author) commented Oct 21, 2015

@iamqizhao, can you PTAL at the issue? I'd really love to get this in so that the gRPC proxy doesn't need a patched gRPC client.

@iamqizhao (Contributor) commented:

This won't be merged. Exposing the Picker to users is not the right way to proceed, because calling Picker.Pick(...) changes the internal state of the Picker. It works in your case only because your use case skips the grpc.ClientConn layer, which is what normally calls Picker.Pick(...). Your current code messes up the layering of gRPC. I have not had time to figure out a desirable solution for you, but at first glance I think you should make a transport Picker yourself, which is what I'd expect for proxy-type use cases.

@mwitkow (Contributor, Author) commented Oct 23, 2015

@iamqizhao I think there is a slight misunderstanding here.

In my implementation I'm not skipping grpc.ClientConn; I'm purposefully using it as an abstraction representing a "backend" (which IMHO fits the general gRPC approach). Why do you think a proxy should need a custom implementation of a transport Picker? I see no benefit, and mostly the drawbacks of duplicated code or dropped functionality. I understand you're busy, but I would appreciate your thoughts on this so that the proxy code is as reusable by others as possible.

As for the Picker, how about the other approach: NewTransport(ClientConn) -> (ClientTransport, error), similar to NewClientStream and Invoke but returning the grpc.transport abstraction. This seems less messy to me than exposing a Picker, and in line with the current public abstractions. What do you think?
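
Mirroring how NewClientStream and Invoke take the ClientConn as an argument, the proposal might look something like this (the signature is a guess, again against the stand-in types; the real NewClientStream also takes a context and call options):

```go
// Hypothetical top-level constructor in the spirit of NewClientStream and
// Invoke: takes the ClientConn and returns the raw transport instead of a
// framed stream. Sketch only, using the stand-in types from above.
func NewTransport(cc *ClientConn) (ClientTransport, error) {
	// Internally this would consult cc's picker or transport pool the
	// same way NewClientStream does, but stop before the codec layer.
	return cc.picker.Pick()
}
```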

@iamqizhao (Contributor) commented:

I said you skip grpc.ClientConn because your code mostly operates on ClientTransport directly. In general, transport is a private package of gRPC-Go and should not be used by users directly. Some special applications, such as a gRPC proxy, might need to operate on the transport directly; for those, the developers should build their own upper-layer control logic (the counterpart of ClientConn). In principle, grpc.ClientConn is the entity that manages the underlying transports. You should either take it all or take none of it. Trying to break it into pieces and taking advantage of only parts of it will make gRPC extremely hard to grow and maintain.

Your proposal is problematic because:
i) it violates the above principle. For example, what if the ClientConn decides to tear down the ClientTransport returned by NewTransport at some point (see the sketch below)?
ii) it does not work for some load-balancing policies. For round robin, which ClientTransport should NewTransport return, given that the ClientConn may have hundreds of ClientTransports under its hood?
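
Made concrete with the stand-in types from the earlier sketches, the lifecycle hazard in (i) looks like this (hypothetical code):

```go
// Sketch of the ownership race with a hypothetical NewTransport (stand-in
// types from the earlier sketches). Neither side knows when the other is
// done with the transport it shares.
func lifecycleHazard(cc *ClientConn) {
	t, err := cc.NewTransport()
	if err != nil {
		return
	}
	// Stand-in for the ClientConn's own behavior: a health check or an
	// address update may decide to tear this same transport down.
	go func() { _ = t.Close() }()

	// Any raw writes the caller now performs on t race with that Close.
	_ = t
}
```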

In terms of code duplication, I do not think it is a problem, because this code is dedicated to a special use case, and it is not yet known how much will actually be duplicated once the code becomes stable. It is perfectly fine for you to keep it in your own repo and share it with other users who have similar use cases. I once considered having a contrib directory in the grpc-go repo for outside contributions, but decided against it due to the potential code-quality and maintenance issues.

@iamqizhao (Contributor) commented:

We can keep the discussion here. Let me close this PR now to avoid confusing other people.

@karlkfi commented Jul 14, 2016

I see that Picker.Pick was replaced by ClientConn.getTransport. Would it be reasonable to make getTransport public now that it's been refactored?
https://github.com/grpc/grpc-go/blob/master/clientconn.go#L424
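
If the maintainers agreed, the change itself would be small. A hedged sketch, assuming the unexported method has roughly the shape visible at the linked line (this is not a committed or proposed grpc-go API, and the real signature may differ):

```go
// Hypothetical only: an exported wrapper in package grpc around the
// unexported lookup in clientconn.go. The exact signature of the
// unexported getTransport at that commit is a guess here.
func (cc *ClientConn) GetTransport(ctx context.Context) (transport.ClientTransport, error) {
	return cc.getTransport(ctx)
}
```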

@lock lock bot locked as resolved and limited conversation to collaborators Jan 19, 2019