
Assign more than one IP #439

Closed · arman-sydikov opened this issue Jun 9, 2019 · 5 comments
Labels: flexibility, protocol/layer2

Comments

@arman-sydikov commented Jun 9, 2019

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250

When we apply the Layer 2 configuration, MetalLB assigns exactly one IP from the address pool to the service.
Is it possible to assign more than one IP to a service?
If yes, how can it be done?


  • Version of MetalLB: v0.7.3
  • Version of Kubernetes: v1.14.2
  • Name and version of network addon (e.g. Calico, Weave...): flannel:v0.11.0-amd64
@NicolasT commented Jul 3, 2019

+1. My scenario: use MetalLB to assign VIP(s) to a service, then have the MetalLB controller apply some kind of (soft) anti-affinity to distribute those per-service VIPs across multiple speakers, and set up DNS to answer resolve requests with those VIPs, either round-robin or as a multi-A-record response.

I believe this could be achieved by creating multiple Service objects of type LoadBalancer with the same selector (which results in multiple VIPs assigned to the same backing service instances), but that wouldn't imply the (soft) anti-affinity between speakers, so it wouldn't necessarily increase availability or improve load balancing across multiple front-end nodes.

Does this make sense?
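
For illustration, a minimal sketch of the multiple-Service approach described above, assuming a hypothetical app label my-app (all names and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: my-app-vip-1
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-vip-2
spec:
  type: LoadBalancer
  selector:
    app: my-app          # same selector, so both VIPs front the same pods
  ports:
  - port: 80
    targetPort: 8080

MetalLB would assign each Service its own IP from the pool, but nothing in this setup guarantees the two IPs are announced from different speaker nodes.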

@ffly90 commented Mar 4, 2022

Another solution would be for MetalLB to implement support for "static" IPs. Such an IP would be brought up on one of the interfaces in the cluster even though MetalLB is not aware of a service that consumes it. A service could then list it under externalIPs and still make use of MetalLB's fail-over capabilities.

This could really make life easier if you want to distribute your applications over multiple IPs but still have the ingress controller handle TLS termination.
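
The static-IP announcement itself is the proposed (not existing) MetalLB feature; the Service side would just use the standard Kubernetes externalIPs field, roughly like this (names and addresses are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx    # hypothetical ingress controller service
spec:
  selector:
    app: ingress-nginx
  ports:
  - port: 443
  externalIPs:           # standard Kubernetes field; the proposal is for
  - 192.168.1.241        # MetalLB to announce these IPs on cluster interfaces
  - 192.168.1.242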

@migs35323

+1

@fedepaol (Member) commented Oct 7, 2022

The solution to this issue is to have multiple services sharing the same set of endpoints. Doing this is functionally equivalent to having a single service with two IPs.
The anti-affinity mentioned by @NicolasT would require the leader election algorithm to be stateful, as opposed to the current implementation, and I am not sure it's a complication we want to introduce.
Also, we now have the node selector feature, which allows splitting the set of nodes the two (or more) services are advertised from; that helps a bit in the direction of better balancing (see the sketch below).
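
For reference, a sketch of the node selector feature, assuming the CRD-based configuration of newer MetalLB releases (pool and node names are illustrative):

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: advertise-from-node-a
  namespace: metallb-system
spec:
  ipAddressPools:
  - pool-a               # hypothetical IPAddressPool name
  nodeSelectors:         # restrict which nodes may announce these IPs
  - matchLabels:
      kubernetes.io/hostname: node-a

A second L2Advertisement with a different node selector could pin another service's pool to a different node.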

Closing this issue.

@fedepaol closed this as completed Oct 7, 2022
@kvaps (Contributor) commented Dec 4, 2023

Hi, I'd like to mention that MetalLB currently uses a consistent hashing algorithm. For each candidate node it calculates a hash of nodeName + serviceIP, sorts the results, and selects the first node. In simpler terms, when you have multiple services with the same endpoints but different IP addresses, MetalLB will typically choose different nodes to announce those IPs. Here are more details:

https://github.com/fedepaol/metallb/blob/3df9cbd68b9d24edeaba6d84f30fcf3b446a7289/speaker/layer2_controller.go#L116-L126
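
A condensed, self-contained Go sketch of that selection logic (simplified from the linked speaker/layer2_controller.go, not the exact implementation):

package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
	"sort"
)

// pickAnnouncingNode mirrors the idea in the linked code: hash each candidate
// node name together with the service IP, sort by the hashes, and announce
// the IP from the first node in that ordering.
func pickAnnouncingNode(nodes []string, serviceIP string) string {
	candidates := append([]string(nil), nodes...) // don't mutate the caller's slice
	sort.Slice(candidates, func(i, j int) bool {
		hi := sha256.Sum256([]byte(candidates[i] + "#" + serviceIP))
		hj := sha256.Sum256([]byte(candidates[j] + "#" + serviceIP))
		return bytes.Compare(hi[:], hj[:]) < 0
	})
	return candidates[0]
}

func main() {
	nodes := []string{"node-a", "node-b", "node-c"}
	// Two services with the same endpoints but different IPs usually hash
	// to different orderings, so their IPs land on different nodes.
	fmt.Println(pickAnnouncingNode(nodes, "192.168.1.240"))
	fmt.Println(pickAnnouncingNode(nodes, "192.168.1.241"))
}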
