Simplify Library Structure for Kubernetes #1642
Conversation
Force-pushed from c86ff8d to 6b17e10
Force-pushed from 6b17e10 to 46a3745
- This directory contains a sample ingress as well as the definition for the Kubernetes manifests for the ingress and ingress controller.
+ This directory contains a sample ingress as well as the definition for the Kubernetes manifests for the ingress controller.
+ The sample ingress controller is a single deployable (previously this was two separate deployables).
Can we keep both modes?
- Your suggested approach of having the ingress & controller live in a single process
- The existing model where you have a controller and possibly multiple ingress instances talking to it instead of the API server

We're all in favour of collapsing unnecessary projects to simplify the code/structure, but we don't want to throw out the option of hosting the controller standalone.

How complicated do you think it would be to have a single deployable (e.g. a single NuGet) as you suggest, but give it the option to act as a standalone ingress, a standalone controller, or 2-in-1? That is, have options to specify 1) where to get the config and 2) whether it should act as a service that serves that config to other ingresses.
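For illustration, a minimal sketch of what such a mode switch could look like in a single host. The `DeploymentMode` enum, the `Mode` setting, and the commented service registrations are hypothetical placeholders, not existing YARP APIs:

```csharp
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;

// Hypothetical mode switch; none of these names exist in YARP today.
public enum DeploymentMode
{
    Ingress,      // serve traffic, fetch config from a controller
    Controller,   // watch the Kubernetes API and serve config to ingresses
    Combined      // watch the Kubernetes API and serve traffic in one process
}

public static class Program
{
    public static void Main(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureServices((context, services) =>
            {
                // Set via appsettings.json, environment, or command line.
                // Defaults to Ingress (the first enum member) if unset.
                var mode = context.Configuration.GetValue<DeploymentMode>("Mode");

                if (mode is DeploymentMode.Controller or DeploymentMode.Combined)
                {
                    // register the Kubernetes watcher / config-dispatching services here
                }

                if (mode is DeploymentMode.Ingress or DeploymentMode.Combined)
                {
                    // register the reverse proxy and its config source here
                }
            })
            .Build()
            .Run();
}
```

The same binary could then ship as one NuGet/container image and be deployed in any of the three modes purely through configuration.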
It's possible, but I still believe we will run into problems when we introduce TLS termination, specifically when it comes to providing access to Kubernetes Secrets. With the "split" design, this would involve sending TLS private/public keys over the API that currently exists between the ingress and the controller, and given there is no authentication mechanism on that API, this would open up a significant security hole.

My other concern with the "split" approach is that there is no redundancy in the "controller instance", as I believe an ingress instance can only connect to a single "controller instance".
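For context, in the combined model each replica can read TLS material straight from the API server under its own service account, so the key material never crosses an unauthenticated side channel. A minimal sketch, assuming the official KubernetesClient NuGet package (`k8s`); the secret name and namespace are placeholders, and the exact method grouping varies between client versions:

```csharp
using System;
using k8s;
using k8s.Models;

// Rough sketch using the official KubernetesClient package ("k8s").
// Method grouping (client.CoreV1 vs. flat methods) differs between client versions.
var config = KubernetesClientConfiguration.InClusterConfig();
using var client = new Kubernetes(config);

// A kubernetes.io/tls secret stores the certificate and key under well-known keys.
V1Secret secret = await client.CoreV1.ReadNamespacedSecretAsync("my-tls-secret", "default");
byte[] cert = secret.Data["tls.crt"];
byte[] key = secret.Data["tls.key"];

Console.WriteLine($"Loaded cert ({cert.Length} bytes) and key ({key.Length} bytes).");
```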
@MihaZupan I've just pushed a change that:
- Still provides a single project with all K8S logic
- Reinstates the split deployable
- Adds a new sample with a combined (single) deployable
Hopefully this covers both bases, but I still think a single deployable is the right way to go long term, for the reasons I've mentioned.
Thanks David, really appreciate the great answers about the benefits of a combined approach.
I'd say we should get this change in as-is, still enabling a split deployment. We can revisit whether that still has merit as a separate discussion. FWIW I'm not so sure anymore after your arguments :)
We are currently running the single ingress-controller with multiple ingress instances scenario in production. I'd like to understand how combining them would impact this approach. Is the idea that each ingress would monitor the k8s cluster and update its own config accordingly?
@archonandrewhunt just to clarify your scenario: you are running multiple replicas of ingress instances for load balancing, correct? Could you also give me a ballpark idea of the number of ingress instances and the number of ingress/service resources?

The immediate positive impact of this change is that you wouldn't have a single point of failure anymore (i.e. the "controller instance"). It is true that each ingress would then listen for k8s ingress/service events independently and maintain its own in-memory cache, so there would be a slightly higher load on the kube-apiserver. That's why it would be good to know how many instances you run (perhaps I could simulate that myself).

The combined approach is the same approach used by many other ingress controllers, and while there have been some issues with really big systems, other solutions have been developed over the years depending on the needs.
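To make the trade-off concrete, here is a rough sketch (not YARP's actual implementation) of what each replica would do when it watches Ingress resources itself, assuming the official KubernetesClient NuGet package; exact method names and their grouping vary between client versions:

```csharp
using System.Collections.Concurrent;
using k8s;
using k8s.Models;

// Illustrative per-replica watch-and-cache loop; not YARP's actual code.
var client = new Kubernetes(KubernetesClientConfiguration.InClusterConfig());

// Each ingress replica keeps its own in-memory view of the cluster's Ingress resources.
var cache = new ConcurrentDictionary<string, V1Ingress>();

var response = client.NetworkingV1.ListIngressForAllNamespacesWithHttpMessagesAsync(watch: true);

await foreach (var (eventType, ingress) in response.WatchAsync<V1Ingress, V1IngressList>())
{
    var key = $"{ingress.Namespace()}/{ingress.Name()}";

    switch (eventType)
    {
        case WatchEventType.Added:
        case WatchEventType.Modified:
            cache[key] = ingress;          // refresh the cached resource
            break;
        case WatchEventType.Deleted:
            cache.TryRemove(key, out _);   // drop it from the local view
            break;
    }

    // In a real ingress this is where the proxy routes/clusters would be rebuilt.
}
```

The load question then comes down to how many of these watch connections the kube-apiserver sees, which is why the replica count matters.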
Force-pushed from 46a3745 to 2493a7c
Fixed merge conflict from #1664
@dpbevin Thanks for getting back to me. We're running a fairly small deployment right now with 4 ingress replicas and ~30 services/ingresses, so based on what you're describing it sounds like combining the controller into the ingress would be fine for us, and probably even preferable, as you mentioned, to avoid that single point of failure. Appreciate the info.
Force-pushed from 2493a7c to e4e9930
Commit message (truncated): "…ployable", referencing microsoft#1640 Simplify packaging for Kubernetes
Force-pushed from e4e9930 to 8e89825
Rebased to pull in the latest changes from the base branch.
Thanks.
No need, we squash merge PRs.
Addresses the simplicity aspect of #1640 with two commits:

I'll update the namespaces in the `OperatorFramework` folder of the `Yarp.Kubernetes.Controller` module as a subsequent PR.