
prototype and document how to run multiple replicas of the api server #709

Open

gabemontero opened this issue Feb 13, 2024 · 2 comments

Labels: kind/feature (Categorizes issue or PR as related to a new feature.)

Comments

@gabemontero
Contributor

gabemontero commented Feb 13, 2024

Feature request

By adding some affinity policies to the watcher deployment (I'll create a separate issue/PR for that), my team is on a path to properly running the watcher with multiple replicas for HA purposes, with the knative leader election kicking in correctly.

However, the non-knative, gRPC-based api server is a whole different animal.

I also started a thread in Slack at https://tektoncd.slack.com/archives/C01GCEH0FLK/p1707507954291439, but there has been no interest or commentary from others in the community, so at a minimum I'm opening this item for reference.

As best I can tell, gRPC in general supports use of a load balancer per https://github.com/grpc/grpc/blob/master/doc/load-balancing.md, where the balancing happens on a per-call basis, not a per-connection basis.

This feature asks for someone to prototype standing up a gRPC load balancer, verify that a multi-replica api server (with proper affinity policies to ensure the replicas are spread across multiple k8s nodes) handles things correctly, and then document the procedure for others to use.
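As a rough sketch of what the client side of that prototype could look like, the Go snippet below uses gRPC's DNS resolver against a headless Service plus the round_robin policy, so individual calls (not just connections) are spread across api server replicas. The Service name, namespace, and port here are assumptions made up for illustration, not the project's actual defaults.

```go
package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// dns:/// makes the gRPC client resolve every pod IP behind a headless
	// Service, and round_robin then balances individual calls (not just
	// connections) across those replicas. The Service name, namespace, and
	// port below are assumptions for this sketch.
	target := "dns:///tekton-results-api-service.tekton-pipelines.svc.cluster.local:50051"

	conn, err := grpc.Dial(target,
		grpc.WithDefaultServiceConfig(`{"loadBalancingConfig": [{"round_robin":{}}]}`),
		// TLS is omitted only to keep the sketch short; a real deployment
		// would use transport credentials.
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)
	if err != nil {
		log.Fatalf("failed to dial api server: %v", err)
	}
	defer conn.Close()

	// conn can now be handed to a generated Results gRPC client; each RPC is
	// load balanced across the resolved replica addresses.
	_ = conn
}
```

The other common approach is an L7 proxy (e.g. Envoy) in front of the Deployment, which does the per-call balancing server-side so clients stay unchanged.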

There is also the notion of defining API servers for different clients, where you define separate Services. Examples of that in the documentation could be useful.

Use case

  • HA of any system component is a standard requirement for a hosted service.
  • I want external clients to go through the kube-rbac-proxy, but do not require the watcher's communication with the api server to go through it (see the sketch after this list).
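
Purely as an illustration of that split (not the project's actual configuration), the sketch below dials the api server two ways: directly over the in-cluster Service for the watcher, and through a kube-rbac-proxy endpoint with TLS plus a bearer token for external clients. The host names, port numbers, and token handling are all assumptions.

```go
package main

import (
	"crypto/tls"
	"log"

	"golang.org/x/oauth2"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/credentials/oauth"
)

func main() {
	// Watcher path: dial the api server Service directly inside the cluster,
	// bypassing the kube-rbac-proxy. Host and port are assumptions.
	watcherConn, err := grpc.Dial(
		"tekton-results-api-service.tekton-pipelines.svc.cluster.local:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)
	if err != nil {
		log.Fatalf("watcher dial failed: %v", err)
	}
	defer watcherConn.Close()

	// External client path: dial the kube-rbac-proxy endpoint with TLS and a
	// per-RPC bearer token so the proxy can perform its RBAC check.
	creds := oauth.TokenSource{TokenSource: oauth2.StaticTokenSource(
		&oauth2.Token{AccessToken: "<service-account-token>"}, // placeholder
	)}
	externalConn, err := grpc.Dial(
		"results.example.com:8443", // hypothetical externally reachable address
		grpc.WithTransportCredentials(credentials.NewTLS(&tls.Config{})),
		grpc.WithPerRPCCredentials(creds),
	)
	if err != nil {
		log.Fatalf("external dial failed: %v", err)
	}
	defer externalConn.Close()
}
```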

/label feature

gabemontero added the kind/feature label on Feb 13, 2024
@tekton-robot

@gabemontero: The label(s) `/label feature` cannot be applied. These labels are supported: ``


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@khrm
Contributor

khrm commented May 2, 2024

/kind feature
