kube-proxy in IPVS mode breaks MetalLB IPs #153

Open
danderson opened this Issue Jan 26, 2018 · 10 comments

danderson commented Jan 26, 2018

Is this a bug report or a feature request?:

Bug.

What happened:

As reported on Slack: kube-proxy in IPVS mode needs to add service VIPs to a dummy IPVS interface so that routing works correctly when packets arrive at a machine. It seems that kube-proxy is adding ClusterIPs to the dummy interface, but not load-balancer IPs.

This is very surprising to me, because it effectively means that IPVS mode breaks load-balancing for most cloud providers, and in general violates the expectations of what kube-proxy does on the node.

I need to set up an IPVS-powered test cluster, and examine the behavior. This might be an upstream bug, it might be a misconfiguration somewhere, or it might be a planned change of direction for kube-proxy that MetalLB needs to keep up with.
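For anyone wanting to reproduce this, a minimal diagnostic sketch for a node (assuming kube-proxy's default dummy device name, kube-ipvs0, and the standard ipvsadm tool):

```shell
# On a node running kube-proxy in IPVS mode, service VIPs should be
# bound to the dummy interface (kube-ipvs0 by default):
ip addr show kube-ipvs0

# List the virtual servers kube-proxy has programmed into IPVS:
ipvsadm -Ln

# If a MetalLB-assigned LoadBalancer IP appears in neither output,
# packets for that IP reaching the node will not be accepted locally.
```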

danderson commented Feb 17, 2018

Confirmed in my testbed cluster: kube-proxy in IPVS mode does not program the dataplane for LoadBalancer IPs, and apparently not for externalIPs either. This seems like a pretty major feature gap to close before IPVS mode can go GA. I piled onto the recently opened bug at kubernetes/kubernetes#59976 with more data and a request for resolution.

@danderson danderson removed their assignment Apr 1, 2018

danderson commented May 2, 2018

Allegedly, this is fixed in the latest 1.11 nightly builds of kube-proxy. I need to verify that.

pgagnon commented Jul 2, 2018

@danderson

Did you ever get around to testing IPVS mode in 1.11? I would love to know if it works or not.

Thanks!

bjornryden commented Jul 4, 2018

I've been testing a bit on 1.11 with kube-proxy in IPVS mode and kube-router for networking, and this seems to work OK with MetalLB. The kube-routers keep complaining about connections from the upstream firewalls, so there's definitely something I need to fix there (probably just making the upstream BGP peers passive).

kvaps commented Oct 31, 2018

Here are the upstream bugs:
Kube-proxy: kubernetes/kubernetes#59976
Kube-router: cloudnativelabs/kube-router#561

kvaps commented Nov 2, 2018

Probably fixed by kubernetes/kubernetes#70530.
After these settings, MetalLB works fine with kube-proxy in IPVS mode.
But I've only tested L2 mode.
Kubernetes 1.12.1
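(Note for readers on newer releases: MetalLB's docs additionally call for enabling strict ARP when kube-proxy runs in IPVS mode; the `strictARP` field is available from Kubernetes v1.14.2. A sketch of the relevant kube-proxy configuration fragment:)

```yaml
# kube-proxy configuration (e.g. in the kube-proxy ConfigMap)
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true
```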

kvaps commented Nov 22, 2018

Fixed in kube-router too:
cloudnativelabs/kube-router#580

m1093782566 commented Nov 23, 2018

Can we close this issue now?

kvaps commented Nov 28, 2018

@m1093782566, I'm not sure about BGP mode, but for L2 the problem is totally solved.

m1093782566 commented Nov 28, 2018

Thanks for confirming.
