Description
MetalLB Version
0.13.10
Deployment method
Charts
Main CNI
calico
Kubernetes Version
1.24.11
Cluster Distribution
k0s
Describe the bug
Enabling the L2 interface exclusion config in the chart may exclude interfaces that should not be excluded. As a result, no ARP responder is started for those interfaces.
To Reproduce
- Create a Kubernetes node whose primary network interface (for MetalLB) is named `workload`.
- Install the Helm chart with the exclusion config enabled.
Expected Behavior
The interface `workload` should not be excluded.
Additional Context
This effect is caused by the configuration itself:
metallb/charts/metallb/templates/exclude-l2-config.yaml
Lines 8 to 22 in 04abf80
```yaml
excludel2.yaml: |
  announcedInterfacesToExclude:
  - docker.*
  - cbr.*
  - dummy.*
  - virbr.*
  - lxcbr.*
  - veth.*
  - lo
  - ^cali.*
  - ^tunl.*
  - flannel.*
  - kube-ipvs.*
  - cni.*
  - ^nodelocaldns.*
```
All entries are interpreted as regular expressions. A few entries use the anchored notation `^...`, and most use wildcards like `.*`.
Unfortunately, this is not the case for all entries. The entry `lo` matches not only the interface name `lo` (the expression should be `^lo$`) but also `workload`, which contains `lo` as a substring.
The ConfigMap should be updated with correctly anchored regular expressions. Entries like `docker.*` or `veth.*` should likewise be changed to `^docker.*` and `^veth.*`, and so on.
Otherwise the matching will not be intuitive.
I've read and agree with the following
- I've checked all open and closed issues and my request is not there.
- I've checked all open and closed pull requests and my request is not there.
- I've checked all open and closed issues and my issue is not there.
- This bug is reproducible when deploying MetalLB from the main branch
- I have read the troubleshooting guide and I am still not able to make it work
- I checked the logs and MetalLB is not discarding the configuration as not valid
- I enabled the debug logs, collected the information required from the cluster using the collect script and will attach them to the issue
- I will provide the definition of my service and the related endpoint slices and attach them to this issue