layer2 only announces when speaker nodes overlap with service pod nodes #322
I found a way to make it work: as long as the nginx-ingress service pod is scheduled on one of the subnet=192.168.1.0 nodes, MetalLB announces correctly. If the service pod is scheduled on any of the other nodes (the ones not running metallb-speaker), then no metallb-speaker announces the 192.168.1.50 address and external routing fails. I think this is caused by the shouldAnnounce logic, which skips announcing if no endpoint is on a node running metallb-speaker: https://github.com/google/metallb/blob/master/speaker/layer2_controller.go#L73
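For reference, the gist of that check can be sketched as follows. This is a simplified reconstruction of the election in speaker/layer2_controller.go, not the actual MetalLB source; the function name, signatures, and node names here are illustrative:

```go
package main

import "fmt"

// shouldAnnounce sketches the layer2 controller's election (a simplified
// reconstruction, not the real code): a speaker only answers ARP for a
// service IP if its own node runs a speaker AND hosts an endpoint of the
// service. If no endpoint lands on a speaker node, nobody announces.
func shouldAnnounce(localNode string, speakerNodes, endpointNodes map[string]bool) bool {
	// Collect nodes that both run a speaker and host an endpoint.
	var usable []string
	for n := range endpointNodes {
		if speakerNodes[n] {
			usable = append(usable, n)
		}
	}
	// No endpoint on any speaker node: the service IP is never announced.
	if len(usable) == 0 {
		return false
	}
	// The real controller deterministically elects one winner among the
	// usable nodes; here we simply check membership of the local node.
	for _, n := range usable {
		if n == localNode {
			return true
		}
	}
	return false
}

func main() {
	speakers := map[string]bool{"pve1": true, "pve2": true}
	// Service pod on a non-speaker node: no speaker answers ARP.
	fmt.Println(shouldAnnounce("pve1", speakers, map[string]bool{"worker3": true}))
	// Service pod on a speaker node: that speaker announces.
	fmt.Println(shouldAnnounce("pve1", speakers, map[string]bool{"pve1": true}))
}
```

This matches the observed behaviour: ARP requests reach the speakers, but because the endpoint's node is not in the speaker set, every speaker declines to respond.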
L2 mode doesn't work correctly if you have a cluster with multiple subnets. This is a hard limitation of the network protocols, nothing I can fix. If your cluster is large enough to have multiple subnets, you will need to use BGP mode to distribute IPs.
This issue shouldn't be closed. I cannot have metallb on all of my nodes, so I only run it on a selection. Any workload that isn't running on one of the metallb-nodes won't be announced by metallb. |
Hello @thies226j. As @danderson said, if your requirement is to announce a service from a node which doesn't run a MetalLB speaker, you can achieve that in BGP mode.
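For completeness, a BGP-mode setup uses the MetalLB 0.7 ConfigMap format roughly as below; the peer address, ASNs, and address range are placeholders you would replace with your own values:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 192.168.1.1   # placeholder: your BGP-capable router
      peer-asn: 64512             # placeholder ASN
      my-asn: 64512               # placeholder ASN
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 192.168.1.48/28           # placeholder service IP range
```

In BGP mode the speakers advertise routes to a router rather than answering ARP, so the router can forward traffic into the cluster regardless of which node the endpoint pod lands on.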
Is this a bug report or a feature request?:
Bug report.
What happened:
I have a single nginx-ingress, which metallb has correctly bound to 192.168.1.50, and is scheduled on pve1:
I can connect just fine to this service from any machine within the cluster:
However, I can't connect from outside the cluster (on my local workstation):
What you expected to happen:
The nc from my local workstation should succeed, just as it does from within the cluster.
How to reproduce it (as minimally and precisely as possible):
Use Kubernetes 1.12.0 and metallb 0.7.3 in layer2 mode. Have a service pod run only on nodes that do not have a metallb-speaker pod scheduled.
Anything else we need to know?:
I'm using metallb in layer2 mode, and have labelled two of my three workers that exist on the same subnet (the others are in a different subnet) with:
Then, set the nodeSelector on the metallb-speaker daemonset.
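As an illustration, the labelling and daemonset selector might look like the following; the label key and value are hypothetical, since the actual label used in this setup was not shown above:

```shell
# Label the two workers on the 192.168.1.0/24 subnet (label is hypothetical)
kubectl label node pve1 metallb-speaker=true
kubectl label node pve2 metallb-speaker=true

# Constrain the speaker daemonset to only those labelled nodes
kubectl -n metallb-system patch daemonset speaker --type merge -p \
  '{"spec":{"template":{"spec":{"nodeSelector":{"metallb-speaker":"true"}}}}}'
```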
This configuration worked fine with Kubernetes 1.10 and metallb 0.6.2.
When I run "tcpdump -ni vmbr0 host 192.168.1.50" on the two nodes with the metallb-speaker running, I find they both see ARP requests when I attempt the nc from my workstation:
However, neither of the hosts respond, even though I see that responders are created in the logs:
Environment:
uname -a: Linux pve1 4.15.18-5-pve #1 SMP PVE 4.15.18-24 (Thu, 13 Sep 2018 09:15:10 +0200) x86_64 GNU/Linux