Controller for Service rather than Ingress #1
No longer needs an Ingress for it to work
@adyanth Very interesting! So if we have 10 nodes, how is traffic load balanced across them? If it's not, you may want to have a look at https://github.com/k3s-io/klipper-lb, which could help. Any plans to add back support for ingress controllers like Traefik, since they allow things like basic auth? Last but not least, what about Cloudflare LBs? It would be awesome to support them!
Hey @mysticaltech, for upstream load balancing, you can set the replica count for cloudflared, and the specified number of instances will be deployed. Cloudflare load balances equally over the available connections. There is no way to control this load balancing (weighting/failover) without Cloudflare LBs, which I could not test since it is not a free product :( For downstream, when you add an annotation to a service, cloudflared connects to that service, which means Kubernetes handles load balancing to any available pod that can serve the traffic. Since cloudflared runs inside the cluster, there is no need for a LoadBalancer service (which is only needed to access a service from outside). For ingress support, check this: #28. You can override where cloudflared points, for example to the ingress, using annotations. As I mentioned before, Cloudflare LBs are not a free product. This means (1) I don't have access to them without paying, and (2) people using this open source project would also need to pay. I would much rather have @cloudflare drive that effort ;)
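As a sketch of the two knobs described above (the cloudflared replica count and the per-Service annotation), assuming hypothetical field and annotation names; check the operator's README for the exact keys:

```yaml
# Scale the tunnel: each cloudflared replica opens its own connection to
# Cloudflare, and Cloudflare balances equally across those connections.
# NOTE: "size" and the annotation key below are illustrative placeholders.
apiVersion: networking.cfargotunnel.com/v1alpha1
kind: Tunnel
metadata:
  name: my-tunnel
spec:
  size: 3   # number of cloudflared replicas (hypothetical field name)
  # credentials / domain configuration elided
---
# Expose an in-cluster Service through the tunnel via an annotation.
# Kubernetes then load balances across the Service's ready pods.
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    cfargotunnel.com/tunnel: my-tunnel   # hypothetical annotation key
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

No LoadBalancer type is needed on `my-app`: cloudflared dials out to Cloudflare, so nothing has to be reachable from outside the cluster.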
Ah, perfect, sounds good! Good to see the service annotations. About Cloudflare LBs, which would be helpful for failover (important in production if a node goes down): they are now free up to 500k visits. See https://support.cloudflare.com/hc/en-us/articles/115005254367-Billing-for-Cloudflare-Load-Balancing
I understand, @adyanth. However, supporting those would make your project very powerful; I believe it's a missing piece. Maybe set up GitHub Sponsors? I would be ready to give $5 for three months, or even indefinitely if I use your project. You have done very important work here, because many people would love to use Cloudflare with their clusters. There's also the whole geo + caching aspect that could be interesting to develop. Also, I'm thinking of adding out-of-the-box support for your project in https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner. If you know and love Hetzner, please do try this project, and do not hesitate to submit PRs.
First of all, thanks for the support! GitHub Sponsors is sadly not available in my country yet (I've applied and am on the waitlist), and Open Collective needs a minimum of 100 stars to set up. Adding support for Cloudflare LBs is definitely not difficult, since their APIs are excellent. I just need to work with it for a couple of days/weeks to get familiar and settle on an architecture. Do we add LBs per service or per domain? I am not sure how it works. Please open an issue for supporting LBs, and I will get around to it when possible. And yes, I've heard many good things about Hetzner! But again, my current use cases do not justify any subscription costs. I like hosting it all on my tiny little servers at home, and I use a cloud instance only for monitoring.
Very good to hear, @adyanth! Please continue the active dev, tweak the docs, and I'm sure you'll get to 1000 stars in no time, not only 100. But it's important to code everything so that it can scale indefinitely, as in supporting massive clusters receiving lots of traffic; there should not be any potential bottleneck. I think it's already great, but I had to mention this, as it's one of my honest concerns. About per service or per domain, I honestly do not know yet, as I have not used a Cloudflare LB to this day. The way I see it, you could take inspiration from how klipper-lb does it: you create an LB service in Kubernetes that will, in turn, talk to the Ingress controller, and there would be one Cloudflare LB created for each Kube service of type LoadBalancer. Ideally, the same way that Cloudflare themselves did it in cloudflare-ingress-controller (too bad they stopped development), but without the ingress controller part, as Traefik, for instance, can do that much better. That is precisely what klipper-lb does; the way I see it, it creates a Traefik-compatible LB service based on the schedulable nodes. Ideally, it would also create a Cloudflare LB and connect to it via tunnels. That's really where your project can shine, IMHO! And when I say shine, I mean it; it could become huge!
This is a view of the Klipper pods and the generated svc lb, if you are curious. If you can do the same, connect and/or create/configure a Cloudflare LB for the internal LB svc, and do this over tunnels, it would be mind-bendingly fantastic. Because there would not be any custom usage: people would treat the Cloudflare LB like a local cloud LB, basically automatically configured and managed, all via the Ingress definition.
I believe it's very important not to focus on custom flows but to reuse the well-beaten paths as much as possible, as folks' brains are already configured to work with ingress definitions :) Even though it's possible to use services directly as you do, it's best to avoid that and instead stick to the well-beaten path. This will also ensure complete interoperability of your project with many others. Basically, if you can pull this off, you will be giving a proper Kubernetes cloud LB to any at-home or edge Kube cluster that wants it, with all the benefits that Cloudflare has! If you play your cards right, I am sure Cloudflare will either employ you or end up buying or financing your project. I have no doubt in my mind about that.
All good ideas! Yeah, the whole premise of this working was indeed magical in my mind when I started building this thing out. I have another such mind-blowing idea as a pinned issue :) (#43) Regarding your point on how Cloudflare LB vs klipper-lb (or any LoadBalancer controller) works on K8s, the flow of traffic is actually reversed. When you create a Service of type LoadBalancer, you can then access the service from outside the cluster with load balancing. Ingress controllers like Traefik usually use LoadBalancer Services themselves to expose their ports. Cloudflare LBs (from my knowledge of them) distribute traffic to multiple tunnel IDs; all instances on the same tunnel ID are always load balanced equally. Let's say you have tunnel1 with 2 instances and tunnel2 with 3: you can now use Cloudflare LBs to point/distribute/failover to any of the running cloudflared instances, with weights as needed. You can make it so that tunnel1 is active while tunnel2 is on standby, for example because tunnel2 is located in a remote data center, as per the architecture. Now comes the question of where this sits. Usually, in K8s, an Ingress Controller exposed through a LoadBalancer service is how HTTP traffic gets in. Cloudflare Tunnels are another, parallel way HTTP traffic can get in. Two options are, it can be placed
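To make the tunnel1/tunnel2 example concrete: in Cloudflare Load Balancing, each tunnel can serve as an origin addressed by its `<tunnel-id>.cfargotunnel.com` hostname. A rough sketch of the objects involved, with field names that are illustrative rather than exact (see Cloudflare's LB API docs for the real schema):

```yaml
# Sketch of Cloudflare Load Balancing objects for the tunnel1/tunnel2
# example above. Field names are illustrative; the exact API schema is
# in Cloudflare's Load Balancing documentation.
pools:
  - name: tunnel1-pool
    origins:
      - name: tunnel1
        address: <tunnel1-id>.cfargotunnel.com   # 2 cloudflared replicas behind it
  - name: tunnel2-pool
    origins:
      - name: tunnel2
        address: <tunnel2-id>.cfargotunnel.com   # 3 cloudflared replicas behind it
# The load balancer itself: tunnel1-pool active, tunnel2-pool as failover.
load_balancer:
  name: app.example.com
  default_pools: [tunnel1-pool]
  fallback_pool: tunnel2-pool
```

Within a single pool entry, Cloudflare still balances equally across that tunnel's replicas; the pool/fallback structure is what adds weighting and failover on top.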
If you look at the flows, Cloudflare LBs are not a part of them at all. They sit at the edge, to determine which tunnel and cloudflared instance will serve the traffic. This means they only decide from where the traffic enters the cluster; any load balancing after that is handled by standard K8s flows. You can implement Cloudflare LBs today with this operator with a slight tweak: you would need a second headless service that adds an annotation to a second Tunnel CR that you create. Now that you have two tunnels, you can create a load balancer on Cloudflare and configure it to balance across the two tunnels. Note that this does not have anything to do with K8s LBs, and that when nodes go down in production, as long as K8s correctly schedules the cloudflared pods, or if you have more replicas, your application will not go down, since even with one tunnel you have an equal distribution where traffic is not sent to non-running cloudflared pods. Here is what the flow looks like, everything included. Replicas are shown where present:
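The "slight tweak" described above might look like this. CR shapes and the annotation key are illustrative placeholders; the operator's samples have the exact fields:

```yaml
# Two Tunnel CRs give Cloudflare two tunnel IDs to balance/failover across.
apiVersion: networking.cfargotunnel.com/v1alpha1
kind: Tunnel
metadata:
  name: tunnel-a
spec: {}   # credentials / domain configuration elided
---
apiVersion: networking.cfargotunnel.com/v1alpha1
kind: Tunnel
metadata:
  name: tunnel-b
spec: {}   # credentials / domain configuration elided
---
# A second, headless Service for the same pods, annotated for tunnel-b;
# the original Service stays annotated for tunnel-a.
apiVersion: v1
kind: Service
metadata:
  name: my-app-via-b
  annotations:
    cfargotunnel.com/tunnel: tunnel-b   # hypothetical annotation key
spec:
  clusterIP: None   # headless: no extra virtual IP needed
  selector:
    app: my-app
  ports:
    - port: 80
```

With both tunnels serving the same pods, a Cloudflare LB configured across the two tunnel IDs gives you edge-side failover, while in-cluster balancing stays with Kubernetes.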
It makes total sense; thank you for this deep-dive explanation! 🙏 I want to propose that you create a dummy service load balancer (as Klipper does) that displays dummy IPs and allows ingress controllers to connect to it and listen to the world via tunnels, with Cloudflare LBs at the end. So unless I misunderstood something, it fits what you explained above, minus the dummy service LB. I would like that because, as said before, the whole developer experience would be very familiar to the average Kube user, and it would completely abstract away tunnels.
It's either my misunderstanding or yours, since I still do not understand your request, sorry! How would you propose the traffic flow? Internet -> Cloudflare LB -> cloudflared -> Ingress? This is possible via annotations on the service today, but you mentioned using an ingress controller for it, right? Or are you saying that when you create a LoadBalancer service, it automatically creates the Cloudflare LB too? That can be done, but I still do not understand what you mean by ingress controllers connecting to the dummy service load balancer. Can you give me a pictorial representation of what you would like it to look like, please?
Sorry, I may not have been extra clear, and perhaps the notions are not crystal clear in my head either, but you basically got it right. I am indeed thrilled that the flow "Internet -> Cloudflare LB -> cloudflared -> Ingress" can be done via annotations today, but it's not that practical IMHO, as it requires a learning curve. However, what if we had the flow "Internet -> Cloudflare LB -> cloudflared -> LB service (dummy, as in pipe everything to cloudflared, no outside exposure) -> Ingress"? Then it would work without annotations, and indeed, requesting a service LB would automatically create a Cloudflare LB and do all the piping. Is that clearer?
"Internet -> Cloudflare LB -> cloudflared -> LB service (dummy, as in pipe everything to cloudflared, no outside exposure) -> Ingress" That LB service you mention was my exact point of confusion: how would traffic flow from the dummy LB service to the Ingress? Those are Kubernetes objects, for which we cannot control the flow of traffic (for an LB service we can, by writing an LB controller, but I am not able to visualise that flow, since MetalLB/Klipper-LB traffic flows from an IP to the service to the pods, whereas here we have service to Ingress). The only service I can think of that comes close to what you are saying is an ExternalName service. Also, the second point of requesting an LB automatically does not make sense to me. If we create a Cloudflare LB, what would be the balancing set? How many tunnels do we create, or will it just point to one tunnel? And let's say you create a Service of type LoadBalancer called x, which creates a Cloudflare LB for one tunnel that you specify, and then directs all traffic to the service, which talks to the application. Now you cannot point it to the Ingress; how would that work? A service cannot point back to the ingress without funky steps. Sorry, but the flow is still a bit confusing to me. Regarding the first part, why would you say it is not practical and needs a learning curve? If you have worked with cloudflared's config.yaml, it allows you to specify a hostname and the service to proxy traffic to. You can set both of them manually using the annotations shown here, or if you leave them empty, the defaults work well. To point it to the ingress, you override the target to the ingress rather than the service. Here is an example:
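A sketch of such an annotation-based override, using hypothetical annotation keys (the operator's docs list the real ones):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # All keys below are illustrative placeholders.
    cfargotunnel.com/tunnel: my-tunnel       # which tunnel serves this entry
    cfargotunnel.com/fqdn: app.example.com   # public hostname on the tunnel
    # Override the proxy target: point cloudflared at the ingress
    # controller instead of this Service, so Ingress rules
    # (basic auth, path-based routing, ...) still apply.
    cfargotunnel.com/target: https://traefik.kube-system.svc:443
spec:
  selector:
    app: my-app
  ports:
    - port: 80
```

The hostname still comes from the annotated Service; only the place cloudflared proxies to is redirected to the ingress controller.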
I would say cloudflared sits in the place of an Ingress controller, not a Service, but the reason for writing a Service controller rather than an Ingress controller, as you can see in this PR, is that cloudflared does not support all of Ingress's features, like path-based routing. And to reduce the confusion with annotations, the move to CRDs would make it look like how role bindings work in Kubernetes today: you have a Service, you have a Tunnel, and you create a TunnelBinding so that the Tunnel serves traffic to the Service or an Ingress based on the TunnelBinding config. Will that help?
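The RoleBinding-style flow described above might read something like this; a sketch only, and the field names may differ from the final CRD:

```yaml
# Bind an existing Service to an existing Tunnel, the way a RoleBinding
# ties a subject to a Role. Field names are illustrative.
apiVersion: networking.cfargotunnel.com/v1alpha1
kind: TunnelBinding
metadata:
  name: my-app-binding
subjects:
  - name: my-app          # the Service to expose through the tunnel
tunnelRef:
  kind: Tunnel
  name: my-tunnel         # the Tunnel that serves the traffic
```

The Service and Tunnel objects stay untouched; deleting the TunnelBinding unexposes the Service without touching either.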
I understand, thank you! The CRD seems awesome and would simplify things for sure. But back to the idea above: you ask, "how would the traffic flow from the service LB to the ingress?" My answer is that I do not know, but Klipper LB, built by the Rancher team, is able to do it (source code and screenshots above); they somehow interface with Traefik to make this happen. And to answer your second question, "how would we configure the Cloudflare LB": basically the same way the Cloudflare ingress controller did, also linked above. Basically, I really think the CRDs will make all of this possible, but if you can do the above two proposals by looking at the two projects linked above, then your project would fit the usual paradigm and would have the potential to reach the stratosphere! 🚀 Please do consider this deeply; no need for an immediate answer 🙏
💡 ⚡ Ahhh, you are referring to Traefik's (i.e., the Ingress controller's) own service, not the application's, which klipper-lb handles when using K3s. That is a service created by Traefik to get all traffic into the cluster. Annotating that service to redirect all traffic to the Ingress controller is brilliant! I need to experiment with this, along with the recent support for wildcard proxies and tunnels, which might make a perfect implementation. Combine this with the CRD to directly link it after the fact, and it will be seamless! Also, I was not aware that the Cloudflare ingress controller had support for LBs; they seem to do it by tagging the ingress with the name of the LB. That is indeed a clean way to implement it, and I will look into it. Thank you for taking the time to explain that to me; that was indeed something I did not think of doing. And thanks for all your support!
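The trick being celebrated here can be sketched as annotating the ingress controller's own Service rather than each application Service. Annotation key and Traefik labels below are illustrative placeholders:

```yaml
# Sketch: annotate Traefik's own Service (as deployed by K3s) so ALL
# tunnel traffic enters through the ingress controller, and every
# Ingress rule (auth, path routing, middlewares) applies as usual.
apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: kube-system
  annotations:
    cfargotunnel.com/tunnel: my-tunnel   # hypothetical annotation key
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: traefik     # selector as in a typical Traefik chart
  ports:
    - name: web
      port: 80
      targetPort: web
```

One annotation on one Service then covers every hostname the cluster's Ingress objects define, which is what keeps the developer experience on the well-beaten Ingress path.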
Wonderful! Very glad to see that there is hope of going in that direction. I will follow the project closely and try to spread the word about this repo. Thanks for your patience and explanations too! 🙏
I would also love to see this working. But this is still in progress, right? When do you think it will be implemented and ready to use?
The controller now watches and operates on a Service rather than an Ingress.
Ingress has more requirements, and an Ingress controller is not necessary for the Cloudflare Tunnel to work.
This change works at a lower level than before, so the Ingress is free to be used with ExternalDNS, etc., without interfering with Cloudflare Tunnel operations.