NodePort will not listen on external IP if same IP is used as loadBalancerIP #114815
/sig network
/assign
From slack:
This is not a valid setup. A virtual IP (VIP), like the loadBalancerIP, must not be assigned to a physical interface. The reason to assign a VIP to a physical interface would be to attract traffic with L2 mechanisms, ARP for IPv4 and neighbor discovery for IPv6, but this way is flawed for several reasons. That's why you use metallb in L2 mode instead. Metallb in L2 mode answers L2 requests without assigning the VIP to any interface.
When you send traffic to a nodePort, the destination address should be a node address, not a loadBalancerIP.
There is however a change in the implementation (I will try to find the PR) in the way the nodePort addresses are computed. This is how it should be:
I suspect that pre-v1.23 K8s did this wrong by adding all addresses on all interfaces except kube-ipvs0 (including loopback, which would not work).
These loadBalancerIP addresses are just the external IPs of my nodes, which I use in metallb as an available pool for Service (LoadBalancer). The provider automatically assigns this address to the physical interface as a lease; I don't really have the option of a different setup at the moment.
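As a concrete illustration of this kind of setup (a sketch only — the pool name and address range below are assumptions, not taken from the reporter's cluster), a metallb v0.13+ L2 configuration whose pool overlaps the nodes' own public addresses might look like:

```yaml
# Hypothetical metallb (v0.13+) L2 configuration. The address range is
# a placeholder; in the setup described in this issue it overlaps the
# nodes' own public IPs.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: public-pool
  namespace: metallb-system
spec:
  addresses:
  - 203.0.113.10-203.0.113.20
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: public-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - public-pool
```

With L2 advertisement, metallb answers ARP/ND for pool addresses without assigning them to an interface, which is why the overlap with node IPs is the unusual part here.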
Yes, I know. In my case the public IP address pools for node and loadBalancerIP addresses are the same.
So, the loadBalancerIPs assigned by metallb are actually the real addresses of your K8s nodes?
Yes, these are public IP addresses (each node has its own), which I use as an entry point for some Services (LoadBalancer).
@thockin @danwinship Is this setup supported? @adr-xyt Thanks for explaining, I understand your setup now. I am unsure if it's supposed to work, so I have to ask: have you tested proxy-mode=iptables? It may work, but I wouldn't be surprised if it doesn't.
I will try to test it with iptables, but I use ipvs because of specific services, so I can use the load balancing algorithm - least connection... this method won't solve my problem :(
Found the PR #101429
Why do you need NodePort services at all? Can't you use only LoadBalancer? NodePort is mainly intended to be used by an external load balancer, i.e. like in AWS, but when you manage your own cluster you usually don't have an external load balancer, so using only loadBalancerIPs makes sense. We do that, and even disable nodePort allocation.
Scratch that! In your case, where the node addresses are the external addresses, you should use NodePort only. If you use your node addresses as loadBalancerIP you will hit PR #108460 when you upgrade to later K8s versions. It inserts a blocking rule in the INPUT chain for loadBalancerIPs. WARNING: This will block input to your node if loadBalancerIP==nodeIP.
I understand, thanks for the info. In my case it's a breaking change and I will have to migrate some services to NodePort before the Kubernetes upgrade on the prod environment.
With proxy-mode=iptables everything started working: the node is listening on the nodePort and I can reach the loadBalancerIP. I cleared the rules.
Port 31010 used in the LoadBalancer service example is within the default NodePort range, so I think it can be converted to a NodePort service without affecting external users. Or is 31010 just an example? Or are there other reasons for type: LoadBalancer?
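For reference, a minimal sketch of such a conversion (the service name, selector, and port/targetPort are hypothetical; only nodePort: 31010 comes from the discussion):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app        # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-app       # hypothetical selector
  ports:
  - port: 80          # placeholder cluster port
    targetPort: 8080  # placeholder container port
    nodePort: 31010   # within the default NodePort range 30000-32767
```

Because 31010 is inside the default range, pinning it via `nodePort` keeps the externally visible port unchanged after dropping type: LoadBalancer.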
/area ipvs
This is just an example; I won't have much trouble with the migration. At most, I will divide the available lease of IP addresses into smaller pools for NodePort / LoadBalancer services.
It is a little "unusual" to use the host IP as the LB IP, but it's unfortunate that we broke something that used to work. https://www.hyrumslaw.com/ Lars, is there any path back to "working" that doesn't lose the protections in that linked PR? I could easily see iptables also breaking this.
To not exclude loadBalancerIPs (and externalIPs) when they are on a physical interface is not hard, but it messes up the nice set-operation. The access protection in #108460 should basically not be there if any loadBalancerIP (or externalIP) is also on a physical interface. Can be done, but may be messy.
/triage accepted Even though this issue is accepted, it's still under investigation. I will check the code to see if the old behavior can be restored in a fairly simple way. But there is a problem with back-porting: if I make a PR now it will be in v1.26.x or v1.27, but the function was broken in v1.23, so a fix should really be back-ported all the way back. I would prefer to document that a real address on an interface on the node can't be used as a virtual address, like loadBalancerIP or externalIP.
Just wanted to chime in and say that I've also encountered this issue... very grateful to find the discussion and looking forward to a solution. Thank you all! Also just to share, I've tried the following 3 workarounds and they all work in my case:
@adr-xyt I backed down to K8s v1.22.4 and I can't get it to work unless you have the master outside the cluster (i.e. not a node), or have a setup where the external IP you set as loadBalancerIP is not on the main K8s network, something like: K8s is using the "backend" network and the "external" IPs are on the "frontend" network. The reason is that when you assign a loadBalancerIP, that address will be assigned to the kube-ipvs0 device on all nodes.
The result is that "vm-002" (e.g.
I discovered this when testing PR #115019. I think I can restore the pre-v1.23 behavior now, but I suffer from the problem described above.
@chlam4 I don't think your problem is precisely the same. Disabling IPv6 or setting nodePortAddresses can't really be used as work-arounds for this issue.
Thank you, @uablrek. You're right about IPv6 - disabling it didn't really help. I forgot to remove the
It is called from the following, but only if
We're planning to roll out this workaround before upgrading to a future Kubernetes release with your fix. We would much appreciate it if you could shed more light on why it wouldn't work. We didn't want to change to iptables mode if this second workaround really works. Thank you so much!
@chlam4 You are right: to explicitly set nodePortAddresses works. Still, the loadBalancerIP (which is also an address on a node) can't be used by the main k8s network, or the master must not be a node in the cluster, as described in #114815 (comment). And... Warning: This will work up to and including K8s v1.25.x, but NOT v1.26.0! I haven't checked yet but I think it's because of #108460.
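A minimal sketch of that workaround, assuming a hypothetical CIDR for the nodes' public network (nodePortAddresses is a standard KubeProxyConfiguration field restricting which node addresses NodePort listens on):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
# CIDR(s) whose addresses should accept NodePort traffic. The range
# below is a placeholder for the nodes' public network.
nodePortAddresses:
- 203.0.113.0/24
```

As noted above, this only sidesteps the address-computation change; it does not help once the blocking INPUT rule from #108460 matches a loadBalancerIP that equals a node IP.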
Since the loadBalancerIP that is owned by a node is assigned to
From my perspective, this is not a big problem, I have the whole architecture implemented in separate networks. I think it's worth adding this information somewhere in the documentation.
@uablrek Thank you so much for the response, especially the info about v1.26. We'll check it out for sure and will report back. Our node port use case is for single-node deployments only, so luckily there are no other nodes.
What happened?
Hey, after upgrading Kubernetes from 1.22 to 1.23, I'm experiencing strange service provisioning behavior using nodePort. I'm using kube-proxy (ipvs) + calico + metallb.
If the external IP is used by another service of type LoadBalancer (allocated by metallb), then the node owning this IP doesn't listen on the NodePort on that interface.
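A hedged sketch of the setup being described (all names, selectors, and addresses are made up; the relevant detail is loadBalancerIP matching a node's own public IP):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-lb                    # hypothetical name
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # placeholder; equal to one node's own public IP
  selector:
    app: my-app                  # hypothetical selector
  ports:
  - port: 443                    # placeholder ports
    targetPort: 8443
```

Once metallb allocates this IP, the node that owns 203.0.113.10 stops listening for NodePort traffic on that address under kube-proxy in ipvs mode on 1.23+.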
Unfortunately, I can't find any changes that could be causing this, any ideas?
Kubernetes 1.23
Missing externalip1 here:
What did you expect to happen?
Using k8s 1.22, a service with a nodePort listened on every interface, despite the external IP being used by another service.
Kubernetes 1.22
How can we reproduce it (as minimally and precisely as possible)?
Upgrade k8s from 1.22.15 to 1.23.15
Anything else we need to know?
No response
Kubernetes version
Cloud provider
OS version
Install tools
Container runtime (CRI) and version (if applicable)
Related plugins (CNI, CSI, ...) and versions (if applicable)
metallb v0.13.7
ipset v7.5, protocol version: 7
ipvsadm v1.31 2019/12/24 (compiled with popt and IPVS v1.2.1)
#ipvs config: