Manage labels on nodes to allow having fewer static IPs than nodes #15
Comments
In my testing you can still add more nodes; they just won't get static IPs. If you rely on all your nodes being whitelisted, then yes, it won't work. +1 to labelling the nodes that receive a static IP. Being able to run mixed workloads on the same node pool where only some require a static IP would be great! I'd suggest adding a second label containing the name of the IP itself. That way you could use nodeSelector to target a Pod to a specific IP, which could serve other use cases like exposing a service with HostPort.
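For illustration only (this is not something kubeip does today): assuming kubeip wrote a second, hypothetical label such as `kubeip.io/ip-name` carrying the reserved address's name, a pod could be pinned to whichever node currently holds that address with a plain nodeSelector. A minimal client-go sketch, with the label key, IP name, and image all made up for the example:

```go
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createEgressPod pins a pod to whichever node currently holds the named
// reserved IP, via a nodeSelector on the hypothetical "ip name" label.
func createEgressPod(ctx context.Context, client kubernetes.Interface) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "egress-worker", Namespace: "default"},
		Spec: corev1.PodSpec{
			// Hypothetical second label written by kubeip: the name of the reserved IP.
			NodeSelector: map[string]string{"kubeip.io/ip-name": "partner-egress-1"},
			Containers: []corev1.Container{{
				Name:    "worker",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				// HostPort exposure is the other use case mentioned above.
				Ports: []corev1.ContainerPort{{ContainerPort: 8080, HostPort: 8080}},
			}},
		},
	}
	_, err := client.CoreV1().Pods(pod.Namespace).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
```

The same nodeSelector would work in a plain Pod manifest; Go is used here only because that is the language kubeip itself is written in.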
Great, so it's just a matter of adding/removing labels, isn't it?
@avivl - it would be better to replace the dots with dashes, as per the K8s notation specified at https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods
@pdecat @WilliamDenniss |
@avivl are the labels removed when a node fails and the IP is reassigned to another one?
@pdecat If the node gets a new IP, the value of the label will be overwritten.
What if there are fewer reserved IP addresses than nodes?
@pdecat Nothing. If there are fewer IP addresses than nodes, nothing will happen to those nodes. Once an IP becomes available, it will be assigned to one of them.
It will still have a tag referencing an IP address that has been reassigned to another node, won't it? If so, pods can't rely on this label via nodeSelector or affinity constraints to be scheduled on those nodes. Note: perhaps that wasn't clear in my original description, but that's what this issue is all about.
Ideally I think the kubeIP operator would validate that both labels are correct as part of the regular poll, and remove labels that are no longer correct. From a consistency standpoint this seems like the ideal behavior. But I'm curious if this condition can actually happen today? I would think once assigned, it wouldn't be unassigned. Or are you suggesting that the user or some other process may re-assign it? |
Oh, I presumed kubeip would do it for failing or upgraded (cordon, drain, reboot) nodes but now that you mention it, I realize it watches only for new/removed nodes. I guess as long as failing and upgraded nodes keep their reserved IP address, this is covered. |
@pdecat @WilliamDenniss Just to be on the safe side, I'm now checking whether there is a node with the IP tag and clearing that tag.
Great, thanks! |
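For readers following along, here is a rough sketch of the kind of check described above, using client-go. It is not the actual kubeip code, and the label key `kubeip.io/ip` is an assumption that may not match the key kubeip really uses:

```go
package example

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// clearStaleIPLabel removes the hypothetical kubeip.io/ip label from any node
// that still carries the given value, e.g. before the IP is assigned elsewhere.
func clearStaleIPLabel(ctx context.Context, client kubernetes.Interface, ip string) error {
	// List only the nodes already labelled with this IP value
	// (the value may use dashes instead of dots, as suggested above).
	nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{
		LabelSelector: fmt.Sprintf("kubeip.io/ip=%s", ip),
	})
	if err != nil {
		return err
	}
	for i := range nodes.Items {
		node := &nodes.Items[i]
		delete(node.Labels, "kubeip.io/ip")
		if _, err := client.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
			return err
		}
	}
	return nil
}
```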
Documentation states that there must be at least as many static IP addresses as nodes.
I'd love to be able to use kubeip and have fewer static IP addresses than nodes.
Kubeip could probably add specific labels to the nodes, and pods needing egress through these IPs could use nodeSelector or affinity constraints to be scheduled on those nodes (see the sketch below).
In my current use case, I have two NAT instances whose static IPs are whitelisted by several partners. Adding new IPs is not an easy task.
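As a sketch of what such a scheduling constraint could look like (the label key `kubeip.io/assigned` is hypothetical), a pod that only needs some whitelisted egress IP could require any labelled node via node affinity, rather than one specific address:

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
)

// egressAffinity requires scheduling onto any node that carries the
// hypothetical "has a reserved IP" label, regardless of which IP it is.
func egressAffinity() *corev1.Affinity {
	return &corev1.Affinity{
		NodeAffinity: &corev1.NodeAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
				NodeSelectorTerms: []corev1.NodeSelectorTerm{{
					MatchExpressions: []corev1.NodeSelectorRequirement{{
						Key:      "kubeip.io/assigned",
						Operator: corev1.NodeSelectorOpExists,
					}},
				}},
			},
		},
	}
}
```

Pods that need one particular whitelisted address would instead match on a label carrying the IP name, as suggested in the comments above.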