Interface kube-ipvs0 NOARP and DOWN #107662
Comments
@ilpozzd: This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label. The instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/sig network
/sig node
/area ipvs
Do you have strictARP enabled? In layer 2 mode with IPVS, strictARP should be enabled in the kube-proxy config; refer to https://metallb.universe.tf/installation/#preparation
Yes, I do.
MetalLB works correctly, because its job ends with publishing the IP address on the interface. But I can't reach this IP from the K8s node; I get "No route to host".
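For reference, the MetalLB preparation page linked above describes flipping strictARP in the kube-proxy ConfigMap. A minimal sketch of that edit (the ConfigMap name and namespace are the kubeadm defaults; the final rollout restart is an extra step to pick up the change, not part of the linked doc):

```sh
# Preview the change: flip strictARP from false to true in the kube-proxy ConfigMap
kubectl get configmap kube-proxy -n kube-system -o yaml | \
  sed -e "s/strictARP: false/strictARP: true/" | \
  kubectl diff -f - -n kube-system

# Apply the change
kubectl get configmap kube-proxy -n kube-system -o yaml | \
  sed -e "s/strictARP: false/strictARP: true/" | \
  kubectl apply -f - -n kube-system

# Restart kube-proxy so running pods pick up the new setting
# (assumption: kube-proxy runs as a DaemonSet named "kube-proxy")
kubectl rollout restart daemonset kube-proxy -n kube-system
```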
kube-ipvs0 should be down and noarp. This is intentional and not a bug. kube-ipvs0's only purpose is to hold addresses that should be directed to IPVS. Compare with proxy-mode=iptables, where the VIP addresses are not on any interface at all. Unfortunately there is no easy way to do that with IPVS; if there were, it would have been used and no kube-ipvs0 interface would have been defined. The intention is absolutely not to use proxy ARP to announce the addresses on the local segment. That would lead to disaster, since all nodes would send ARP responses for the same addresses. MetalLB in L2 mode does this in a controlled way by carefully selecting one node to answer neighbor requests (IPv4 and IPv6).
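To illustrate the point above, a sketch of how one might confirm this on a node (assumes the default kube-proxy interface name kube-ipvs0 and that ipvsadm is installed on the node):

```sh
# kube-ipvs0 is a dummy interface: it is expected to show state DOWN and NOARP
ip -d link show kube-ipvs0

# Service and LoadBalancer VIPs are attached to it so the kernel treats them as local
ip addr show kube-ipvs0

# The actual forwarding rules live in the IPVS tables, not on the interface
sudo ipvsadm -Ln
```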
What happened?
When the cluster is initialized with kubeadm, the kube-ipvs0 interface is set to NOARP and DOWN, so services exposed through LoadBalancer are unavailable. Manually setting the interface's ARP status to ON and its state to UP only changes the state to UNKNOWN and does not affect the availability of the service.
In this case, 192.168.50.100/32 is the IP address assigned by MetalLB to the Nginx Ingress controller (the Nginx defaultbackend should be reachable at this address).
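A sketch of the checks and manual operations described above (assuming the default interface name; 192.168.50.100 is the MetalLB-assigned address, and the curl target path is only illustrative):

```sh
# Interface comes up as NOARP and DOWN after kubeadm init
ip link show kube-ipvs0

# Manual attempts to change it (state only moves to UNKNOWN, service still unreachable)
sudo ip link set kube-ipvs0 arp on
sudo ip link set kube-ipvs0 up

# Reaching the MetalLB-assigned LoadBalancer IP from a node fails
curl -v http://192.168.50.100/    # -> "No route to host"
```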
What did you expect to happen?
The interface comes up normally and all services are reachable from the LAN.
How can we reproduce it (as minimally and precisely as possible)?
Create a cluster on bare metal using kubeadm.
Anything else we need to know?
Linux parameters:
Kubernetes version
Cloud provider
VMware vSphere
OS version
Install tools
Container runtime (CRI) and version (if applicable)
Related plugins (CNI, CSI, ...) and versions (if applicable)