
Feature Request: Add support for custom transport (in-process, wasm) #906

Open
Merovius opened this issue Sep 22, 2016 · 56 comments
Labels
P2 Type: Feature New features or improvements in behavior

Comments

@Merovius

Say I want to, for example:

  • Develop a frontend and a backend as separate logical services, but deploy them in one binary. For traditional systems, having both run in a single binary makes deployment, installation and operation much simpler, but I also want the option to easily split them apart for more complex use cases.
  • Use an embedded database (like leveldb), but be able to peek and poke at it for debugging too. I can't just open it twice (it uses locking to prevent that), so instead I want to expose it as a gRPC service for debugging. However, to reuse all my application-level code that interprets the raw bytes in the db, I also want that code to only use the exposed gRPC service. The application can then just open a leveldb, wrap it in a service and use that to talk to the database, whereas the debugger can connect to the service and use the same client.

So, I want to be able to implement a FooServer and then connect to it from the same process. Of course, I could just listen on localhost and connect to that or something like that, but then I'd pay the penalty of serializing and deserializing everything and running the bytes through the kernel (which is significant when it's in the path of talking to your database, for example).

Instead, it would be cool if grpc allowed me to get a "local" connection, like func LocalPair() (*grpc.Client, *grpc.Server), which doesn't use a network at all and just directly passes the proto.Messages around.

I'd be willing to try to implement that myself, but first I wanted to ask if this is a use case you'd be willing to support.

@menghanl
Contributor

This can be done by providing a custom dialer (WithDialer) to the client and a custom listener to the server.
That way, you can make the net.Conn connecting the client and server a wrapper around something in memory.

@Merovius
Author

Thanks for replying. However, even a loopback connection will still require serializing and de-serializing all the requests and metadata. That's kind of a waste if we already have the correctly typed values at either end.

So this feature request is specifically about bypassing all of that. I haven't yet found a way to achieve it. For example, even if I were to try to wrap it myself and use codegen, I can't actually inspect the passed CallOptions (and thus any sent metadata, which makes it kind of a non-starter) due to the way it's set up. So, from what I can tell, support for this would need to come from the grpc package itself.

@iamqizhao
Contributor

To bypass all of this overhead, we need an in-process transport implementation (in addition to the http2Client and http2Server transports) in the transport package, plus the corresponding custom codec, listener, etc. We did think about this previously, but it is not a trivial change and unfortunately we do not have enough hands to cover it now.

As an alternative, I think if you add a wrapper on top of the grpc generated code to switch between the in-process and normal cases, you should probably be able to achieve this without any changes in the grpc library (I have not thought through all the details and could be wrong, though).

@mwitkow
Contributor

mwitkow commented Oct 25, 2016

This would be really useful for @yugui and the grpc-gateway use case.

@hsaliak

hsaliak commented May 16, 2017

Added an enhancement label to this, but the work is not prioritized at the moment.
To flesh this out further, it may be best to submit a proposal that discusses a few implementation options as a language-specific gRFC.

@dennisdoomen

In the .NET world, this is expected functionality. OWIN does it really well and allows you to build HTTP-enabled components and host them anywhere. If you host them in-process, all the requests happen completely in memory.

@dfawley dfawley added Type: Feature New features or improvements in behavior and removed Type: Enhancement labels Aug 24, 2017
@jhump
Member

jhump commented Aug 30, 2017

@iamqizhao wrote:

As an alternative, I think if you add a wrapper on top of the grpc generated code to switch between the in-process and normal cases, you should probably be able to achieve this without any changes in the grpc library

Unfortunately, this is not necessarily the case. The main issue for an in-process implementation is that the CallOption stuff is totally opaque. So, even were there a way to communicate headers and trailers in-process, there is no way code can actually interact with these options -- at least not without forking grpc-go and adding said code to that package so it can interact with these unexported types.

I brought this up on the mailing list some time ago:
https://groups.google.com/d/msg/grpc-io/NOfh5ESgnyc/RgDJe5g0EgAJ

I'm now revisiting this issue because I've hit it again with something else I'm trying to do: a client interceptor that does client-side retries with support for "hedging". With hedging, the interceptor may issue a retry before the prior attempt completes -- triggered by a timeout rather than a failure. But the header/trailer metadata of only the last/successful attempt should be used for the header and trailer call options. Since these types are totally opaque, it isn't possible for an interceptor to do what it needs to do. And if it just passes the options along, unchanged, to multiple concurrent attempts, the way the client-provided metadata addresses get set is both non-deterministic and racy.

@dfawley
Member

dfawley commented Aug 31, 2017

FWIW, retries and hedging are coming to gRPC-Go natively in a month or two. Relevant gRFC.

Regarding the initial issue, we have since created the bufconn package to at least bypass the network stack (messages are still [de]serialized and everything goes through the HTTP/2 transport, but this should help with overhead somewhat). Otherwise, the team still doesn't have the bandwidth to take on an in-process transport any time soon.

@jhump
Member

jhump commented Aug 31, 2017

Otherwise, the team still doesn't have the bandwidth to take on an in-process transport any time soon.

It would be nice if the API were at least amenable to a 3rd party library supplying this. Unfortunately, it is currently not due to #1495.

@Random-Liu

Any updates on this?

/cc @stevvooe @crosbymichael I don't think bufconn is enough for us. Right now we can only wrap all the gRPC services.

@jhump
Member

jhump commented Mar 2, 2018

There are still a couple of issues that prevent a 3rd-party package from providing an in-process channel: #1495 and #1802.

However, that hasn't stopped me! Take a look at these:
https://godoc.org/github.com/fullstorydev/grpchan
https://godoc.org/github.com/fullstorydev/grpchan/inprocgrpc

The above issues do represent shortcomings, though. They basically mean that call options are ignored by custom channel implementations (so you can't get response headers and trailers from a unary RPC). And there is a similar issue on the server side: you cannot set response headers or trailers from a unary RPC implementation.

@Random-Liu, I don't really understand why you can't use bufconn though. It is similar enough in concept and construction that if you can't use it, you may not be able to use inprocgrpc either. The main advantage of inprocgrpc is that it doesn't incur serialization/de-serialization overhead or deal with HTTP/2 framing to wire the client up to the server. It uses a channel with a limited buffer size to achieve back-pressure, instead of using flow control windows in the HTTP/2 protocol.
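Stripped of the gRPC specifics, the back-pressure mechanism described here is just a bounded channel standing in for HTTP/2 flow-control windows (an illustration, not inprocgrpc's actual code):

```go
package main

import "fmt"

// messagePipe carries already-typed messages from client to server with
// back-pressure: once the buffer is full, Send blocks until Recv drains a
// message, much like an HTTP/2 flow-control window but without any framing
// or serialization.
type messagePipe struct {
	ch chan interface{}
}

func newMessagePipe(window int) *messagePipe {
	return &messagePipe{ch: make(chan interface{}, window)}
}

func (p *messagePipe) Send(msg interface{}) { p.ch <- msg }
func (p *messagePipe) Recv() interface{}    { return <-p.ch }

func main() {
	p := newMessagePipe(2)
	p.Send("a")
	p.Send("b") // buffer now full; a third Send would block until a Recv
	fmt.Println(p.Recv(), p.Recv()) // a b
}
```

A Send into a full buffer blocks until the receiver drains a message, which is exactly the role the flow-control window plays on a real transport.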

@dfawley
Member

dfawley commented Mar 2, 2018

@Random-Liu,

Any updates on this?

We were literally talking about this yesterday, so we haven't forgotten about it. However, it's still not at the top of our priority list. It will probably be another 6 months before we can get to it. Our other priorities right now are channelz, performance, and retry. It would be a fairly meaty project if an outside contributor wanted to take it on, but we would be able to advise.

The main advantage of inprocgrpc is that it doesn't incur serialization/de-serialization overhead

Skipping serialization/deserialization seems dangerous in Go, because proto messages are mutable. This means that if the server modifies the request message, the client would see the result of that modification. (The same is true for streaming in either direction.) This shouldn't typically happen, but it's a notable difference between a real server and inprocgrpc.
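The aliasing hazard can be shown with a plain struct standing in for a proto message (a sketch; real proto messages are mutable structs in the same way):

```go
package main

import "fmt"

// Request is a stand-in for a mutable proto message.
type Request struct{ Name string }

// handler mutates its input. Well-behaved servers shouldn't do this,
// but nothing in Go prevents it.
func handler(req *Request) { req.Name = "mutated" }

func main() {
	// Without serialization, client and server share one object.
	shared := &Request{Name: "original"}
	handler(shared)
	fmt.Println(shared.Name) // mutated -- the client observes the change

	// Copying first (e.g. via proto.Clone for real messages) isolates the caller.
	orig := &Request{Name: "original"}
	dup := *orig
	handler(&dup)
	fmt.Println(orig.Name) // original -- the caller's copy is untouched
}
```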

@jhump
Member

jhump commented Mar 2, 2018

This means that if the server modifies the request message, then the client would see the result of that modification

No, this does not happen. The library avoids serialization/de-serialization, but it does copy.

@Random-Liu

Random-Liu commented Mar 2, 2018

I don't really understand why you can't use bufconn though.

@jhump Because we may also want to get rid of the serialization/de-serialization :)

With https://godoc.org/github.com/fullstorydev/grpchan/inprocgrpc, can I make the grpc server serve both remote requests and inproc requests at the same time?

@dfawley
Member

dfawley commented Mar 2, 2018

The library avoids serialization/de-serialization, but it does copy.

Aha, I see now. So you do a proto.Clone() for proto messages, and a best-effort shallow copy otherwise (ref). FWIW, it may be better (if possible) to serialize and deserialize using the configured codec in the fallback case, in order to avoid this potential problem. But I also agree with the comment that says the fallback path should basically never be exercised, as most people use proto with grpc.

@joe-getcouragenow

@robsonmeemo
Here is a hacky way to do it today: https://github.com/elliotpeele/golang-wasm-example

It's using websockets under the hood.

@tiwariashish86

tiwariashish86 commented Sep 11, 2020

As per #241, there will be a new interface provided that would make it possible to add a UDP transport as well. It would be really helpful for games. Is there any update on that? If not, is there any way to use a UDP client/transport underneath gRPC?

@dfawley
Member

dfawley commented Sep 11, 2020

This was sadly deprioritized before I could get it finished. There's a bit-rotting prototype of all the client-side API changes and a shim in the existing transport here. Aside from one minor thing I'd like to change in the design (namely the use of Attributes instead of []interface{} for passing opaque parameters around), if someone wanted to pick this up, we would be willing to do reviews. The biggest remaining change from the prototype is that I don't want the transport to have the shim layer, but instead to directly expose the intended API.

EDIT: Also, the prototype does not have the server-side implementation done, which could be quite complex.

@bwplotka

bwplotka commented Dec 15, 2020

Thanks for this discussion. I would actually like to +1 this heavily, though our use case might be different.

We leverage gRPC a lot in the Thanos project and it helps enormously. The thing is that Thanos, similar to Google Monarch (if you are familiar), has a hierarchical node API strategy with gRPC in between.

Let's take for example one gRPC service:

// Store represents the API of an instance that stores XOR encoded values with
// label set metadata (e.g. Prometheus metrics).
service Store {
  rpc Info(InfoRequest) returns (InfoResponse);
  rpc Series(SeriesRequest) returns (stream SeriesResponse);
  rpc LabelNames(LabelNamesRequest) returns (LabelNamesResponse);
  rpc LabelValues(LabelValuesRequest) returns (LabelValuesResponse);
}

Within this, we have many implementations, but one of them is a simple "fanout" that fans out requests, merges the responses together, and proxies them back to the caller (called proxy).

Long story short, we have many cases where one microservice wants to either:

  • Talk to the proxy server logic via gRPC remotely
  • Talk to the proxy server logic in the same process, merely using the gRPC server methods as Go functions.

I am having a hard time understanding why there is no existing logic for this in the current gRPC generated code. In-process transport is one thing, but I don't care about all the interceptors, or sometimes about headers, trailers, and metadata, when I invoke the server method in the same process. So something like generated ServerAsClient converter code would be easy to create, no?

What we use right now is something like this:

https://github.com/thanos-io/thanos/blob/326475560963983406b68c3a77cddcf7482e8f43/pkg/store/storepb/inprocess.go#L10

Can't we just ensure the Go gRPC generator will generate such a well-tested, benchmarked, etc. converter for each method? (It's trivial for unary RPCs, a bit more complex for streaming.) WDYT? 🤗

EDIT: Testing is another use case we leverage on a lot as well (thanks @glerchundi for reminding)

@glerchundi

+1 to what @bwplotka is proposing. We're doing exactly that to cover two different use cases:

  1. Integration testing to mock in-process services
  2. Proxy servers for public <-> internal mappings

Thanks for raising 😊

@lootek

lootek commented Dec 16, 2020

@bwplotka did you give https://github.com/fullstorydev/grpchan/tree/master/inprocgrpc (mentioned earlier in this thread) a try? It's not exactly what you need, but it would simplify things as you could then just bind channels? Or maybe even communicate directly between the far ends, bypassing the proxy part entirely. I did not spend much time reading through your current code, so I may be missing the idea of your architecture, but I'm very successfully using grpchan (kudos @jhump!) and just thought you might find it helpful as I did

@asutula

asutula commented Jan 29, 2021

Been a while since I looked at or thought about this issue, but it popped up in my notifications so I got caught up. This isn't necessarily a solution, but it is interesting and could be learned from. I stumbled across the https://pkg.go.dev/google.golang.org/grpc/test/bufconn package this week and am using it in testing (as intended, based on the package name). Anyway, it enables your client and service to run in a single process using an in-memory transport. Works great (for testing at least). You can see my use of it here: https://github.com/textileio/textile/blob/asutula/fil-rewards-bookkeeping/api/filrewardsd/service/service_test.go#L411

@bwplotka

bwplotka commented Jan 30, 2021

Thanks all for your responses.

grpchan looks quite solid: https://github.com/fullstorydev/grpchan/blob/master/inprocgrpc/in_process.go
I think it's a good balance between not going down to bytes (it does not marshal, it just passes the message directly) and still solidly transporting all trailers and metadata 🤗 I love it at first glance - we will take a look. In our case overhead matters and we kind of care about each allocation here, so let's see. (: Thank you!

NOTE: I doubt an in-process network for light e2e test purposes is a sensible request. For that, you want something like a virtual net.Conn that allows gRPC communication using in-process memory. Anything lighter and without marshaling might not be ... an e2e test (: I would argue whether that is needed, TBH. You can use thousands of extra sockets on CI systems, e.g. free GitHub Actions, so I don't see a problem with starting a full gRPC server in a separate goroutine, unless I am missing something 🤔

@menghanl menghanl changed the title Feature Request: Add support for In-Process transport Feature Request: Add support for custom transport (in-process, wasm) May 3, 2021
dmacvicar added a commit to codenotary/immudb that referenced this issue May 4, 2021
…PC client

Note that the generated gateway code has a comment about this:

// RegisterImmuServiceHandlerServer registers the http handlers for service ImmuService to "mux".
// UnaryRPC     :call ImmuServiceServer directly.
// StreamingRPC :currently unsupported pending grpc/grpc-go#906.
// Note that using this registration option will cause many gRPC library features to stop working. Consider using RegisterImmuServiceHandlerFromEndpoint instead.
func RegisterImmuServiceHandlerServer(ctx context.Context, mux *runtime.ServeMux, server ImmuServiceServer) error {
dmacvicar added a commit to codenotary/immudb that referenced this issue May 8, 2021
* feat(pkg/server): embedded REST API / console server

* chore: netgo tag is not required since Go 1.5

https://golang.org/doc/go1.5#net

* feat: embedd webconsole with Go < 1.16

* chore: use go run to run statik

* chore: add test for embedded API/web console server

* chore(pkg/server/webserver): Use ImmuService directly instead of a gRPC client


* chore: expose web server settings

* chore: embed from a contained dist directory

* chore: regenerate files

* feat: add TLS support to the webconsole

* chore: fix webserver test

* chore: document webconsole build

* chore: method not longer needed

* chore: add test for webserver options

* chore(pkg/server): increase coverage for Options

* chore(pkg/server): increase coverage for Options