
When a GRPC server also uses a GRPC client when handling a request, metadata is handled incorrectly #1148

Closed
jhump opened this issue Mar 25, 2017 · 3 comments

jhump commented Mar 25, 2017

If I implement a GRPC server, and the handler method also uses a GRPC client stub, bad things can happen. There are two distinct problems with how metadata is currently handled:

  1. The way values are conditionally encoded/decoded causes downstream errors ("rpc error: code = Internal desc = stream terminated by RST_STREAM with error code: 1").
  2. Since server and client metadata use the same context key, all incoming metadata (to a server action) are automatically propagated to downstream services.

Encoding Errors

The current metadata.MD has strange bimodal behavior.

  1. As a client, it stores encoded values. When creating metadata (via metadata.Pairs), binary headers are properly encoded.
  2. As a server, it stores decoded values. When processing the HTTP/2 header frame, the server calls metadata.DecodeKeyValue. Reading the data means just using the metadata as a map.
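
For illustration, a minimal sketch of that asymmetry, assuming the behavior described above (metadata.Pairs encoding "-bin" values at creation time, which is how the library behaved when this issue was filed); the key name trace-bin is made up for the example:

package main

import (
  "fmt"

  "google.golang.org/grpc/metadata"
)

func main() {
  raw := []byte{0x00, 0x01, 0xfe} // arbitrary binary header value

  // Client side: per the behavior described above, Pairs encodes "-bin"
  // values when the MD is created, so the map holds the encoded text.
  md := metadata.Pairs("trace-bin", string(raw))
  fmt.Println(md["trace-bin"])

  // Server side (for contrast): the transport calls metadata.DecodeKeyValue
  // while reading the HTTP/2 header frame, so the MD placed in the handler's
  // context holds raw, decoded values, and reading is a plain map lookup.
}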

So, when using binary headers, having program A invoke method B, which in turn invokes method C, generates an error in the server hosting method C. The reason is that the header is ultimately decoded twice: server B decodes it, client B re-transmits the decoded bytes without re-encoding them, so the header on the wire from B to C carries content that is not valid for the wire, and server C then fails when it tries to decode it.

  • A creates the metadata. The values are encoded on creation.
  • GRPC client invoked by A transmits the encoded values.
  • GRPC server for B receives the metadata and decodes the values.
  • GRPC client invoked by B transmits the values, which are no longer encoded.
  • GRPC server for C receives bad headers. Boom.

I have not verified it, but I assume an analogous flow exists for response metadata going the other direction and that flow suffers from the same problem.

Incorrect Propagation

In the above example of A calling B calling C, I don't actually want B to send C the same metadata it received from A (even if doing so didn't cause protocol errors). Metadata are equivalent to HTTP headers. No library I know of automatically propagates incoming HTTP headers to outgoing HTTP requests, and for good reason: a piece of metadata is generally scoped to a single request.
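
A sketch of how this plays out with the shared context key, using the metadata API as it existed at the time (metadata.FromContext, also used in the commit message below); handleB and callC are made-up stand-ins for B's handler and its downstream client call:

package serverb

import (
  "context"

  "google.golang.org/grpc/metadata"
)

// handleB stands in for server B's RPC handler; callC stands in for an
// outgoing RPC made with B's gRPC client stub.
func handleB(ctx context.Context, callC func(context.Context) error) error {
  // With a single shared context key, A's (already decoded) metadata is
  // sitting in ctx whether or not the handler ever looks at it.
  if md, ok := metadata.FromContext(ctx); ok {
    _ = md
  }
  // Passing ctx straight through means the client picks that metadata up
  // from the same key and re-sends all of it to C.
  return callC(ctx)
}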

Some kinds of cross-cutting metadata -- like tracing context and authentication tokens -- often should be propagated. But that should be explicit: either via a whitelist, or by having servers use interceptors that explicitly propagate the relevant metadata.
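
For example, a whitelist-based client interceptor along these lines could do the propagation explicitly. This is only a sketch: it assumes separate incoming and outgoing context keys as proposed below (grpc-go later added metadata.FromIncomingContext and metadata.NewOutgoingContext for exactly this split), and the whitelisted key names are made up:

package interceptors

import (
  "context"

  "google.golang.org/grpc"
  "google.golang.org/grpc/metadata"
)

// propagated lists the only metadata keys worth forwarding downstream.
var propagated = []string{"trace-bin", "authorization"}

// PropagateSelected is a grpc.UnaryClientInterceptor that copies whitelisted
// keys from the incoming (server-side) metadata to the outgoing (client-side)
// metadata before invoking the RPC. For brevity it overwrites any outgoing
// metadata already attached to ctx.
func PropagateSelected(ctx context.Context, method string, req, reply interface{},
  cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error {
  if in, ok := metadata.FromIncomingContext(ctx); ok {
    out := metadata.MD{}
    for _, k := range propagated {
      if vals, present := in[k]; present {
        out[k] = vals
      }
    }
    if len(out) > 0 {
      ctx = metadata.NewOutgoingContext(ctx, out)
    }
  }
  return invoker(ctx, method, req, reply, cc, opts...)
}

A server would install something like this on the client connections it dials (e.g. via grpc.WithUnaryInterceptor), so propagation becomes a deliberate, auditable choice rather than a side effect.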

Solution

One cause of both of these issues is that the API for handling metadata does not distinguish between incoming request metadata (for use by the server) and outgoing request metadata. Simply having different APIs (metadata.FromServerContext and metadata.FromClientContext) that use different context keys under the hood would solve both issues.
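
A rough sketch of what "different context keys under the hood" could mean; the package and the NewClientContext name are hypothetical, and FromServerContext/FromClientContext are just the names suggested above, not an actual library API:

package mdsplit

import "context"

type MD map[string][]string

// Distinct, unexported key types guarantee that server-side (incoming) and
// client-side (outgoing) metadata never collide in the same context slot.
type serverMDKey struct{}
type clientMDKey struct{}

// FromServerContext returns the metadata the caller sent with its request.
func FromServerContext(ctx context.Context) (MD, bool) {
  md, ok := ctx.Value(serverMDKey{}).(MD)
  return md, ok
}

// NewClientContext attaches metadata to be sent on outgoing RPCs.
func NewClientContext(ctx context.Context, md MD) context.Context {
  return context.WithValue(ctx, clientMDKey{}, md)
}

// FromClientContext returns the metadata that will be sent on outgoing RPCs.
func FromClientContext(ctx context.Context) (MD, bool) {
  md, ok := ctx.Value(clientMDKey{}).(MD)
  return md, ok
}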

Another option that would solve both issues would be to change the actual type of metadata.MD so that the map keys aren't just strings but simple tuples of a name and a flag (e.g. isServer). This lets code distinguish the two kinds of metadata and handle each appropriately, including whether to encode/decode for the wire and whether to propagate. If it is deemed appropriate to propagate server metadata into the client metadata of the server's outgoing requests, the library could at least provide an option to control what is propagated (such as a filter/whitelist/blacklist of metadata names).
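
For concreteness, the key type under this alternative might look roughly like this (names are hypothetical):

package mdtuple

// mdKey carries both the header name and which side it belongs to, so a
// single map can hold server (incoming) and client (outgoing) entries
// without them being confused or silently propagated.
type mdKey struct {
  Name     string
  IsServer bool // true for metadata received by the server
}

type MD map[mdKey][]string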

Another change, orthogonal to the suggestions above, would also make the API much more useful (though, on its own, it would only fix the encoding issue): leave all data in the metadata map decoded. Instead of having metadata.Pairs encode values, defer that and let the transport encode them when putting them on the wire. This removes the odd bimodal behavior and allows, for example, a client interceptor to examine client metadata without first decoding the values in the map.
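
A small sketch of the intended effect; this is the proposed behavior, not what the library did at the time, and the base64 call merely stands in for whatever wire encoding the transport applies to "-bin" headers:

package main

import (
  "encoding/base64"
  "fmt"
)

func main() {
  raw := []byte{0x00, 0x01, 0xfe}

  // Under the proposal, the metadata map always stores raw values...
  md := map[string][]string{"trace-bin": {string(raw)}}

  // ...so an interceptor can inspect them directly, with no decode step.
  fmt.Printf("raw: %x\n", md["trace-bin"][0])

  // Encoding happens only in the transport, at the moment the value is
  // written to the header frame.
  onTheWire := base64.RawStdEncoding.EncodeToString([]byte(md["trace-bin"][0]))
  fmt.Println("wire:", onTheWire)
}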


dfawley commented Mar 27, 2017

Thanks for pointing this out. I'm going to work on having separate keys in the context so that the server doesn't unintentionally forward the metadata from the inbound RPCs into its outbound RPCs. I'm also going to look into fixing the consistency problems with binary data in the metadata without making any substantial API changes.

dfawley added a commit to dfawley/grpc-go that referenced this issue Mar 28, 2017
This will prevent the incoming RPCs' metadata from appearing in outgoing RPCs
unless it is explicitly copied, e.g.:

incomingMD, ok := metadata.FromContext(ctx)
if ok {
  ctx = metadata.NewContext(ctx, incomingMD)
}

Fixes grpc#1148

jhump commented Mar 30, 2017

@dfawley, any ETA for a solution? I'm trying to decide whether to write code to work around the issue or whether I can just wait on a fix.


dfawley commented Mar 30, 2017

PR #1157 is for this and is waiting for review. If you'd like, you can patch that change and test with it.

dfawley added a commit to dfawley/grpc-go that referenced this issue Apr 6, 2017
dfawley added a commit that referenced this issue Apr 7, 2017