
Ignore a service of type=LoadBalancer #685

Closed
deitch opened this issue Jul 31, 2020 · 24 comments · Fixed by #1417

@deitch

deitch commented Jul 31, 2020

Is there a way to have metallb ignore a Service of type=LoadBalancer? I couldn't find it in the docs or via a brief search of the code.

From a UX perspective, I could see three ways to do this:

  1. Annotation on the service, e.g. metallb.universe.tf/ignore=true or similar
  2. configmap, where I can explicitly list services to ignore
  3. My favourite, configmap, where I can provide annotations that, if set, would cause metallb to ignore the service.

The last is my favourite, as it is very similar to the standard taints/tolerations. Short of that, I would go with the service annotation, followed by the configmap.

But if it already exists, I will take whatever it has.

@johananl
Member

AFAIK there is no k8s equivalent to ingress class for service objects. I think this feature is similar in essence.

It makes sense to me to look into having some way to tell MetalLB to ignore a service (can be opt-in or opt-out). I'm curious about the use case though. Could you share some info regarding why you need this @deitch?

@deitch
Author

deitch commented Jul 31, 2020

Could you share some info regarding why you need this @deitch?

Sure. Some services might not need (or want) to be managed by metallb. E.g. I may have a cluster where some deployed services have their LoadBalancer managed by something else, and some are managed by metallb.

All told, I probably could get the same effect with service.spec.externalIPs, but I haven't tried it yet. That wouldn't help if I wanted something else to manage it, though.
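
For illustration, a minimal sketch of that alternative, assuming the standard spec.externalIPs field (service name, selector and address are hypothetical): a plain ClusterIP Service exposed on a fixed address, with no LoadBalancer controller involved at all.

apiVersion: v1
kind: Service
metadata:
  name: example-svc          # hypothetical name
spec:
  type: ClusterIP
  externalIPs:
  - 192.0.2.10               # address must be routed to the nodes by other means
  selector:
    app: example
  ports:
  - port: 80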

@aledbf

aledbf commented Jul 31, 2020

I'm curious about the use case though.

One use case is inlets. We may want a service of type=LoadBalancer with an IP address from the LAN but also with external connectivity through DO.
Right now I use these annotations on a service I don't want to be handled by metallb:

    dev.inlets.manage: "true"
    metallb.universe.tf/address-pool: "dummy" # there is no such pool
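
For context, a minimal sketch of how that workaround might look on a full Service manifest (service name and selector are hypothetical); pointing the pool annotation at a non-existent pool keeps metallb from ever assigning an address:

apiVersion: v1
kind: Service
metadata:
  name: inlets-managed-svc                    # hypothetical name
  annotations:
    dev.inlets.manage: "true"
    metallb.universe.tf/address-pool: "dummy" # no pool with this name exists, so metallb skips the service
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 80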

@deitch
Author

deitch commented Jul 31, 2020

That is smart, just adding a dead address pool. I would prefer a proper method, but that would work.

@aledbf

aledbf commented Jul 31, 2020

@deitch me too :)

@aledbf

aledbf commented Jul 31, 2020

@johananl I can help with this if it makes sense. Something like a new annotation metallb.universe.tf/manage: "false" to opt out?

@johananl
Member

Thanks a lot @aledbf. I'd like to hear what @rata and @daxmc99 think about this before we decide to implement.

@champtar
Contributor

See also #607

@andersmarkendahl

Hi!

Some thoughts on the topic.
The first part focuses mostly on the MetalLB controller; I comment on the speaker at the end.

From our point of view, such a function would be useful in clusters running multiple customers, where some customers use MetalLB and some do not.

In such a case, the function would be best implemented as an opt-in solution instead of an opt-out one. Imagine CustomerA and CustomerB deploying in parallel in a single cluster, where CustomerA will rely on MetalLB and CustomerB will solve load balancing with another LB solution. CustomerB does not want to modify its manifests with an annotation saying it will not use a piece of software that might exist in some clusters. It becomes even more tricky as the number of customers not using MetalLB grows.
For this reason it makes more sense to have MetalLB configured in a way where users must opt in to use it.

Taking this into account, I would like us to consider the already existing auto-assign option for address pools. If it is set to false, the metallb controller will not assign VIPs to services unless the customer does one of the following two things in the service manifest:

  1. Use the metallb annotation for address pools

  2. Requesting a specific VIP using loadBalancerIP.

Either of the two means that the customer is making a decision to opt in to MetalLB.
Using a metallb annotation obviously does, but so does picking a specific VIP, since if you are sharing a cluster you are assumed to be aware of the IP plan and which IP ranges are yours (see the sketch below).
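
To make the idea concrete, here is a minimal sketch of such an opt-in setup, assuming the pre-CRD ConfigMap syntax (pool name, address range and service are hypothetical):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: customer-a-pool
      protocol: layer2
      auto-assign: false        # never hand out addresses unless a service asks for this pool
      addresses:
      - 192.168.10.0/24
---
# CustomerA opts in by naming the pool explicitly (or by setting spec.loadBalancerIP):
apiVersion: v1
kind: Service
metadata:
  name: customer-a-svc
  annotations:
    metallb.universe.tf/address-pool: customer-a-pool
spec:
  type: LoadBalancer
  selector:
    app: customer-a
  ports:
  - port: 80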

In my mind this solution would suffice to cover this need. I am interested to hear your thoughts.

I am unsure how the speaker behaves in the case where another controller (not MetalLB) assigns a VIP: does it still announce it? (Please comment if you have knowledge on the topic.)
If so, this could possibly be solved by having the MetalLB speaker check the IP ranges in the configmap and ignore any VIPs not found there (which would mean another piece of software is managing them).

Best Regards
Anders

@juanrgm

juanrgm commented Aug 14, 2020

I am trying to run multiple metallb controllers on the same cluster, but the address pool assignments are out of control: both controllers pick up the same service and keep fighting over the assigned address.

So I think an optional service annotation to specify the controller would be useful.

metallb.universe.tf/controller-name: metallb-1

@andersmarkendahl

andersmarkendahl commented Aug 14, 2020

Hi!

@juanrgm: I do not think multiple metallb controllers in a single Kubernetes cluster are officially supported.
MetalLB is a cluster-wide service in a Kubernetes cluster and is not meant to run multiple times in parallel.

Also, I do think that your question is somewhat another topic; "running multiple MetalLB instances" is a much broader subject, likely to raise other questions that must be resolved.

Best Regards
Anders

@rata
Contributor

rata commented Aug 14, 2020

@juanrgm I think several metallb controllers on the same cluster are really a different issue than this. Can you please open a new issue for that and explain the use case: why you need that, how you propose to address it, etc.?

@juanrgm

juanrgm commented Aug 14, 2020

Basically I want the metallb controller not to touch one service (ignore a service, as the issue title says) because it will be handled by another load balancer (which happens to be another metallb).

So the metallb controller would only manage services that have the same controller-name.

@andersmarkendahl

@juanrgm Yes, the use cases are similar, but it becomes problematic if we start to include in the discussion a problem statement that is not a supported use case. As @rata stated, arguments for why multiple MetalLB controllers are needed must be given, but that should not be done within this ticket, as it would sidetrack the proposals given here.
Sidenote: regarding multi-tenant MetalLB deployment, there is an open ticket here: #383.

@danielkucera

I personally would prefer the third option @deitch proposed, but I would extend it.
So (each instance of) metallb could be configured using a ConfigMap like this:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    ...
    annotations:
      whitelist:
        - my-annotation: green
      blacklist:
        - other-annotation: red
  - when nothing is configured, everything works as it does now: watch all services.
  - when a whitelist is configured: watch only services that have an annotation included in the whitelist.
  - when a blacklist is configured: watch the above, except services with annotations matching the blacklist.
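
Under that proposal, a service would opt in simply by carrying a whitelisted annotation; a hypothetical sketch (name and selector are made up):

apiVersion: v1
kind: Service
metadata:
  name: green-svc              # hypothetical name
  annotations:
    my-annotation: green       # matches the whitelist above, so this metallb instance watches it
spec:
  type: LoadBalancer
  selector:
    app: green
  ports:
  - port: 80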

@rata
Contributor

rata commented Sep 10, 2020

When working on this, we probably want to take into account this KEP that is in the works: kubernetes/enhancements#1960. Quoting from there:

When Service Type=LoadBalancer is enabled by a Kubernetes cloud provider, it is a global
configuration that applies for all Service Type=LoadBalancer resources in a given cluster.
This becomes problematic if users want to leverage multiple Service Type=LoadBalancer
implementations in a given cluster.

@alexellis

alexellis commented Oct 16, 2020

@deitch we must stop bumping into each other like this 😄

If it's useful for context, this is how we did it in the inlets-operator:

[Screenshot from 2020-10-16 showing the inlets-operator code for this; image not included]

The use case came from a reported issue where home users use MetalLB to access private services, and the inlets-operator for the ones they want to expose publicly, like their IngressController, in order to get TLS certs.

@deitch
Author

deitch commented Oct 18, 2020

haha! We do keep bumping into each other, @alexellis, do we not? Always happy to, though. :-)

So you use annotations. I like it, although the specific annotations tie things in more tightly than I would particularly like.

Either way, looking for some help from the metallb crew here.

@displague

https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#servicespec-v1-core

loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.

Does loadBalancerClass provide the feature needed here, in a standard way (granted, MetalLB may need to be updated to adopt this behavior)?

https://github.com/metallb/metallb/search?q=loadBalancerClass
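
For illustration, a minimal sketch of how a Service would be taken out of the default implementation's hands using that field (the class name and service name here are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: other-lb-svc                         # hypothetical name
spec:
  type: LoadBalancer
  loadBalancerClass: example.com/other-lb    # implementations not watching this class ignore the Service
  selector:
    app: other
  ports:
  - port: 80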

@deitch
Author

deitch commented Jun 21, 2021

@displague is loadBalancerClass a new addition from 1.21? I couldn't find it in 1.20.

metallb would need to be updated to support it, which wouldn't help k8s versions <1.21 (if I understood the above correctly).

A simpler solution is just not to call clearServiceState() every time something doesn't fit within metallb's algorithm. I agree that anything metallb manages should get that treatment, but it should not assume that everything is managed by it.

@johananl
Member

johananl commented Jul 13, 2021

If loadBalancerClass solves the problem, my tendency is to go with that solution since it is a k8s standard. Since loadBalancerClass was introduced in k8s v1.21, this solution would indeed not work with older k8s versions. As long as we aren't breaking older versions by implementing support for loadBalancerClass (which I don't think should happen), I'd go with this solution rather than add a MetalLB-specific solution based on annotations/labels.

@deitch WDYT?

@andrewsykim

For the record, v1.21 includes loadBalancerClass but the feature is alpha, so it's not enabled by default. Starting in v1.22 the feature gate is on by default, so the field will be more widely available: kubernetes/kubernetes#103129

If loadBalancerClass solves the problem, my tendency is to go with that solution since it is a k8s standard. Since loadBalancerClass was introduced in k8s v1.21, this solution would indeed not work with older k8s versions. As long as we aren't breaking older versions by implementing support for loadBalancerClass (which I don't think should happen), I'd go with this solution rather than add a MetalLB-specific solution based on annotations/labels.

I generally agree with this; at worst, any older version should ignore or drop the field.

@deitch
Author

deitch commented Jul 14, 2021

As @andrewsykim said, it is only alpha now, so it is going to be a bit of time. And I do dislike saying that very popular, only slightly older versions won't work. On the other hand, I get the idea of using something that is a k8s standard.

If we are going to go down the (sane) standard path, is there anything that can work for older k8s versions?

@andrewsykim

I agree that it's not ideal to ignore those older versions. One possible approach is to support both an annotation and the official field, where the field always takes precedence over the annotation, and then gradually phase out use of the annotation over time as more folks move to newer clusters.
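
To illustrate that transition idea, a hypothetical sketch (the annotation name below is made up for illustration and is not an existing MetalLB annotation): a Service could carry both, and a controller implementing this approach would honour the field when present and fall back to the annotation on older clusters:

apiVersion: v1
kind: Service
metadata:
  name: transitional-svc                     # hypothetical name
  annotations:
    metallb.universe.tf/load-balancer-class: example.com/other-lb   # hypothetical fallback for clusters < v1.21
spec:
  type: LoadBalancer
  loadBalancerClass: example.com/other-lb    # standard field; would take precedence where supported
  selector:
    app: transitional
  ports:
  - port: 80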
