kube-proxy in IPVS mode breaks MetalLB IPs #153
Is this a bug report or a feature request?: Bug report
As reported on Slack: kube-proxy in IPVS mode needs to add VIPs to a dummy interface for routing to work correctly when packets arrive at a machine. It appears that kube-proxy adds ClusterIPs to that dummy interface, but not LoadBalancer IPs.
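To illustrate what "adding VIPs to a dummy interface" means in practice, here is a minimal Go sketch using the github.com/vishvananda/netlink library (which is what this kind of address programming typically uses on Linux). The device name `kube-ipvs0` matches the dummy device kube-proxy creates in IPVS mode; the VIP value is a placeholder, and this is not kube-proxy's actual code path, just an illustration of the binding that makes the kernel accept packets for a VIP and hand them to IPVS.

```go
package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

// bindVIP attaches a VIP as a /32 address to the given dummy device, so the
// kernel treats traffic addressed to the VIP as local instead of forwarding
// or dropping it. This is the step the report says happens for ClusterIPs
// but not for LoadBalancer IPs.
func bindVIP(device, vip string) error {
	link, err := netlink.LinkByName(device)
	if err != nil {
		return err
	}
	addr, err := netlink.ParseAddr(vip + "/32")
	if err != nil {
		return err
	}
	return netlink.AddrAdd(link, addr)
}

func main() {
	// Placeholder VIP for illustration only.
	if err := bindVIP("kube-ipvs0", "10.96.0.10"); err != nil {
		log.Fatal(err)
	}
}
```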
This is very surprising to me, because it effectively means that IPVS mode breaks load-balancing for most cloud providers, and in general violates the expectations of what kube-proxy does on the node.
I need to set up an IPVS-powered test cluster, and examine the behavior. This might be an upstream bug, it might be a misconfiguration somewhere, or it might be a planned change of direction for kube-proxy that MetalLB needs to keep up with.
Confirmed in my testbed cluster: kube-proxy in IPVS mode does not program the dataplane for LoadBalancer IPs, and apparently not for externalIPs either. This seems like a pretty major feature gap to close before IPVS mode can go GA. I piled onto the recently opened bug at kubernetes/kubernetes#59976 with more data and a request for resolution.
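For anyone who wants to reproduce the check on their own node, here is a small Go sketch that lists the addresses currently bound to the `kube-ipvs0` dummy device, again assuming the github.com/vishvananda/netlink library; on a cluster exhibiting this bug, only ClusterIPs would appear in the output, with LoadBalancer IPs and externalIPs missing. (`ip addr show kube-ipvs0` gives the same information interactively.)

```go
package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// Look up the dummy device kube-proxy's IPVS mode creates.
	link, err := netlink.LinkByName("kube-ipvs0")
	if err != nil {
		log.Fatal(err)
	}
	// List the IPv4 VIPs bound to it.
	addrs, err := netlink.AddrList(link, netlink.FAMILY_V4)
	if err != nil {
		log.Fatal(err)
	}
	for _, a := range addrs {
		fmt.Println(a.IPNet.String())
	}
}
```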
I've been testing a bit on 1.11 with kube-proxy in IPVS mode and kube-router for networking, and this seems to work OK with MetalLB. The kube-routers keep complaining about connections from the upstream firewalls, so there's definitely something I need to fix there (probably just making the upstreams passive on the BGP session).