
Simplify Library Structure for Kubernetes #1642

Merged: 2 commits into microsoft:main, May 9, 2022

Conversation

@dpbevin (Contributor) commented Apr 8, 2022

Addresses the simplicity aspect of #1640 with two commits:

  • First commit combines the ingress controller into a single deployable
  • Second commit moves the relevant pieces of the "Operator Framework" into the Kubernetes library

I'll update the namespaces in the OperatorFramework folder of the Yarp.Kubernetes.Controller module in a subsequent PR.

Review thread on the sample README (old line vs. new line):
- This directory contains a sample ingress as well as the definition for the Kubernetes manifests for the ingress and ingress controller.
+ This directory contains a sample ingress as well as the definition for the Kubernetes manifests for the ingress controller.

The sample ingress controller is a single deployable (previously this was two separate deployables).
Member:
Can we keep both modes?

  • Your suggested approach of having the ingress & controller live in a single process
  • The existing model where you have a controller and possibly multiple ingress instances talking to it instead of the API server

We're all in favour of collapsing unnecessary projects down to simplify the code/structure, but we don't want to throw out the option of hosting the controller standalone.

How complicated do you think it would be to have a single deployable (e.g. a single NuGet) as you suggest, but with the option to act as a standalone ingress, a standalone controller, or 2-in-1? That is, have options to specify 1) where to get the config from and 2) whether it should act as a service that serves that config to other ingresses.
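For illustration only, a rough sketch of what such a mode switch might look like; none of these type or property names exist in YARP today, they just show the three roles and the two options described above.

```csharp
using System;

// Illustrative sketch only: these names are not part of YARP.
// The idea is a single package that can run in any of the three roles.
public enum KubernetesIngressMode
{
    Combined,              // watch the Kubernetes API and serve traffic in one process
    StandaloneController,  // watch the Kubernetes API and publish config to ingress instances
    StandaloneIngress      // pull config from a controller instead of the API server
}

public class KubernetesIngressOptions
{
    public KubernetesIngressMode Mode { get; set; } = KubernetesIngressMode.Combined;

    // Where a standalone ingress gets its config from (the controller's dispatch endpoint).
    public Uri? ControllerUrl { get; set; }

    // Whether this instance should also serve config to other ingresses
    // (meaningful for Combined or StandaloneController).
    public bool ServeConfigToIngresses { get; set; }
}
```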

@dpbevin (Contributor, Author):

It's possible, but I still believe we will run into problems when we introduce TLS termination, specifically when it comes to providing access to Kubernetes Secrets. With the "split" design, this would involve sending TLS private/public keys over the existing API between the ingress and the controller, and given that API has no authentication mechanism, this would open up a significant security hole.

My other concern with the "split" approach is that there is no redundancy in the "controller instance" as I believe an ingress instance can only connect to a single "controller instance".
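For context on the Secrets point: in a combined deployable, each instance can read the TLS material straight from the Kubernetes API under its own RBAC, rather than shipping key pairs over an unauthenticated controller-to-ingress channel. A minimal sketch, assuming the official KubernetesClient NuGet package (exact method shapes vary by client version); the secret name and namespace are placeholders.

```csharp
using System;
using System.Security.Cryptography.X509Certificates;
using System.Text;
using k8s;

// In a combined deployable, TLS material stays inside the pod: read the
// kubernetes.io/tls secret directly from the API server using in-cluster credentials.
var client = new Kubernetes(KubernetesClientConfiguration.InClusterConfig());

// "my-app-tls" and "default" are placeholders for the Ingress's spec.tls secretName/namespace.
var secret = await client.CoreV1.ReadNamespacedSecretAsync("my-app-tls", "default");

// kubernetes.io/tls secrets hold PEM data under these well-known keys.
var certPem = Encoding.UTF8.GetString(secret.Data["tls.crt"]);
var keyPem = Encoding.UTF8.GetString(secret.Data["tls.key"]);

var certificate = X509Certificate2.CreateFromPem(certPem, keyPem);
Console.WriteLine($"Loaded TLS certificate: {certificate.Subject}");
```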

@dpbevin (Contributor, Author):

@MihaZupan I've just pushed a change that:

  • Still provides a single project with all K8S logic
  • Reinstates the split deployable
  • Adds a new sample with a combined (single) deployable

Hopefully this covers both bases but I still think a single deployable is the right way to go long term for the reasons I've mentioned.
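For a sense of what the combined (single) deployable looks like in practice, a minimal Program.cs sketch; the AddKubernetesReverseProxy registration name is an assumption here, so check the Combined sample for the exact wiring.

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

// Assumed registration name: wires the Kubernetes ingress monitoring from
// Yarp.Kubernetes.Controller and YARP's proxy services into one process.
// See the Combined sample for the actual extension method.
builder.Services.AddKubernetesReverseProxy(builder.Configuration);

var app = builder.Build();

// Routes and clusters come from the in-process Kubernetes cache rather than
// from a separate controller deployable.
app.MapReverseProxy();

app.Run();
```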

Member:

Thanks David, really appreciate the great answers about the benefits of a combined approach.

I'd say we should get this change in as-is, still enabling a split deployment. We can revisit whether that still has merit as a separate discussion. FWIW I'm not so sure anymore after your arguments :)

@ghost commented Apr 21, 2022

We are currently running the single ingress-controller with multiple ingress instances scenario in production. I would like to know more to understand how combining them would impact this approach. Is the idea that each ingress would monitor the k8s cluster and update their own configs accordingly?

@dpbevin (Contributor, Author) commented Apr 21, 2022

@archonandrewhunt just to clarify your scenario, you are running multiple replicas of ingress instances for "load balancing", correct? Could you also give me a ballpark idea of the number of ingress instances and the number of ingress/service resources please?

So the immediate positive impact of this change is that you wouldn't have a single point of failure anymore (i.e. the "controller instance").

It is true that each ingress would then listen for k8s ingress/service events independently and maintain its own in-memory cache. In this respect, there would be a slightly higher load on the kube-apiserver, so understanding how many instances you would run would be good to know (perhaps I could simulate that myself).

The combined approach is the same approach used by many other ingress controllers, and while there have been some issues in really big systems, solutions have been developed over the years depending on the needs.
One approach is to apply IngressClass annotations to Ingress resources so that subsets of ingresses are managed by separate deployments (not just multiple replicas). This is already supported by Yarp.
Another approach is to apply field selectors when subscribing to k8s event updates from the kube-apiserver. For example, one deployment of the ingress monitors namespaces A-K, while another monitors namespaces L-M. While Yarp doesn't support this today, adding it would be relatively straightforward, and it's something I'd already been considering doing myself.
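For illustration, a rough sketch of a field-selector-scoped watch using the official KubernetesClient package (method names may vary by client version, and "team-a" is just a placeholder). Note that field selectors only support exact matches, so partitioning by a namespace range would in practice mean several selectors or a label selector.

```csharp
using System;
using k8s;
using k8s.Models;

var client = new Kubernetes(KubernetesClientConfiguration.InClusterConfig());

// Watch only the Ingress objects this deployment is responsible for.
// Field selectors only support exact matches, so a cluster partition would
// mean one selector (and watch) per namespace; "team-a" is a placeholder.
var watchResponse = client.NetworkingV1.ListIngressForAllNamespacesWithHttpMessagesAsync(
    fieldSelector: "metadata.namespace=team-a",
    watch: true);

await foreach (var (eventType, ingress) in watchResponse.WatchAsync<V1Ingress, V1IngressList>())
{
    // Each combined ingress instance updates its own in-memory cache from these events.
    Console.WriteLine($"{eventType}: {ingress.Metadata.NamespaceProperty}/{ingress.Metadata.Name}");
}
```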

@dpbevin (Contributor, Author) commented Apr 21, 2022

Fixed merge conflict from #1664

@ghost commented Apr 22, 2022

@dpbevin Thanks for getting back to me. We're running a fairly small deployment right now with 4 ingress replicas and ~30 services/ingresses, so based on what you are describing, it sounds like combining the controller into the ingress would be fine for us, and probably even preferable, as you mentioned, to avoid that single point of failure.

Appreciate the info

Resolved review threads (outdated):

  • samples/KuberenetesIngress.Sample/Combined/README.md
  • src/Kubernetes.Controller/Protocol/IDispatcher.cs
  • src/Kubernetes.Controller/ConfigProvider/IUpdateConfig.cs
  • samples/KuberenetesIngress.Sample/Ingress/README.md
@dpbevin (Contributor, Author) commented May 6, 2022

Rebased to pull in latest changes from main branch and addressed feedback.
Let me know if you want me to squash my commits.

@MihaZupan (Member) commented:
Thanks.

> Let me know if you want me to squash my commits.

No need, we squash merge PRs.

@MihaZupan MihaZupan merged commit 82a831e into microsoft:main May 9, 2022
@dpbevin dpbevin deleted the dev/bevo/collapse-libs branch May 10, 2022 08:32
@MihaZupan MihaZupan added this to the YARP 2.0.0 milestone Aug 19, 2022