Add the ability to specify metadata #23
Conversation
Thanks for your pull request. It looks like this may be your first contribution to a Google open source project (if not, look below for help). Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please visit https://cla.developers.google.com/ to sign. Once you've signed (or fixed any issues), please reply here (e.g. "I signed it!") to let us know.

What to do if you already signed the CLA: Individual signers / Corporate signers

ℹ️ Googlers: Go here for more info.
I signed it!
CLAs look good, thanks! ℹ️ Googlers: Go here for more info.
Can you elaborate on your use case further? What do you use metadata for in this case?
It is for a proxy service, where we will run the health check request through the proxy to ensure service health. The service needs metadata for routing requests.
So let me rephrase what I understood and you tell me if I got it right: You have multiple gRPC server instances behind a load balancer (what you referred to as a "proxy"), and you use metadata to tell the proxy to route the request to a particular server instance. If I got it right, this sounds like an anti-pattern, as you shouldn't be going through a proxy/LB to check health. You should directly target the instances. For that reason, I'd love to know what sort of environment you're running on (VMs, Kubernetes, something else?), as this is probably not the right way to do health checks on that platform. Again, sorry for digging deep, but for any feature we add, we need to make sure there's a valid use case that we can get behind as a best practice.
We are running on K8s. The requests are not going through an LB. The health check is run per pod (service instance).
The health service will always report that it is serving, and the metadata is needed to control/modify the behavior of the proxy service. Hope this clarifies the scenario.
Hi @ahmetb, any other concerns? Don't you think metadata is generic enough that it could be used for different scenarios?
Can you give some concrete examples? I am still unclear. Why can't your Kubernetes health check directly hit the gRPC service port it's running on inside the Pod? I still don't get what this proxy instance is and how it comes into play.
Maybe I wasn't clear enough. The service we are developing and trying to health check in this scenario is a proxy service. And the way we are checking the service health is by running a request to the health check service through our service (the proxy).
Sorry, it's still not clear enough. Any photos of drawings or diagrams are appreciated. What is the proxy service? Is it a gRPC service itself, or is it simply an HTTP/2 proxy? Why are you not simply using a liveness probe on the proxy service pod? Does it forward the request to another gRPC service because the pod is a proxy itself? I still don't get your situation fully.
It is a gRPC proxy deployed as a sidecar, as shown in the diagram above. The gRPC proxy forwards requests to other gRPC services based on metadata. And to test that the proxy is alive, we will simply forward a request through it to the health check service (which the proxy itself implements). The health probe command will be similar to:
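A sketch of what such a probe invocation could look like; the repeatable `-metadata key:value` flag, the port, and the header names here are purely illustrative assumptions, not the merged API:

```shell
# Liveness probe run inside the pod, talking to the sidecar proxy.
# -metadata is a hypothetical repeatable flag carrying the routing
# headers the proxy needs in addition to the target address.
grpc_health_probe -addr=localhost:5000 \
  -metadata "x-route-key:health" \
  -metadata "x-tenant:probe"
```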
And since the proxy needs additional metadata beyond the forwarding address, the http_proxy option is not enough in our case.
Closing due to inactivity. |
We have a use case where we need to send custom metadata to the server for the health check. This change enables specifying metadata on the command line.