
Manage labels on nodes to allow having less static IPs than nodes #15

Closed
pdecat opened this issue Jun 21, 2018 · 13 comments

pdecat (Contributor) commented Jun 21, 2018

Documentation states that there must be at least as many static IP addresses as nodes.

I'd love to be able to use kubeip and have fewer static IP addresses than nodes.

kubeip could probably add specific labels to the nodes, and pods that need egress through these IPs could use nodeSelector or affinity constraints to be scheduled on them.

In my current use case, I have two NAT instances whose static IPs are whitelisted by several partners. Adding new IPs is not an easy task.
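For illustration, a Pod that needs egress through one of the whitelisted IPs could be constrained like this (a sketch only; the label key and value are hypothetical, not something kubeip defines in this thread):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: egress-worker
spec:
  # Only schedule onto nodes that have been given a static IP.
  # The label key/value here are hypothetical examples.
  nodeSelector:
    kubeip-static-ip: "true"
  containers:
    - name: app
      image: example/app:latest
```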

@WilliamDenniss:

In my testing you can still add more nodes; they just won't get static IPs. If you rely on all your nodes being whitelisted, then yes, it won't work. +1 to labeling the nodes that receive a static IP. Being able to run mixed workloads on the same node pool where only some require a static IP would be great!

I'd suggest adding a second label containing the name of the IP itself. That way you could use nodeSelector to target a Pod at a specific IP, which could serve other use cases, like exposing a service with hostPort.
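The per-IP targeting described above might look like this (sketch; the label key and IP-derived value are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-service
spec:
  # Pin this Pod to the node holding one specific static IP,
  # so its hostPort is always reachable at that address.
  nodeSelector:
    kubip_assigned: "35_192_10_7"   # hypothetical label/value
  containers:
    - name: app
      image: example/app:latest
      ports:
        - containerPort: 8080
          hostPort: 8080
```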

pdecat (Contributor, Author) commented Jun 22, 2018

Great, so it's just a matter of adding/removing labels, isn't it?

@pdecat pdecat changed the title Allow having less static IPs than nodes Manage labels on nodes to allow having less static IPs than nodes Jun 22, 2018
@spark2ignite spark2ignite assigned avivl and unassigned spark2ignite Jun 24, 2018
avivl added a commit that referenced this issue Jun 25, 2018
After assigning an IP address to a node, kubeip will also create a label `kubip_assigned` for that node with the value of the IP address (`.` are replaced with `_`)
@spark2ignite (Contributor):

@avivl - it would be better to replace the dots with dashes, per the K8s naming convention specified at https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods

avivl added a commit that referenced this issue Jun 25, 2018
avivl (Contributor) commented Jun 25, 2018

@pdecat @WilliamDenniss
After assigning an IP address to a node, kubeip will also create a label `kubip_assigned` for that node with the value of the IP address (`.` are replaced with `_`)
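A minimal sketch of the transformation described above, with a validity check against the Kubernetes label-value syntax (the function name is hypothetical; kubeip itself is written in Go):

```python
import re

# Kubernetes label values must be empty, or start and end with an
# alphanumeric character, with '-', '_', '.' allowed in between,
# and be at most 63 characters long.
LABEL_VALUE_RE = re.compile(r"^(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?$")

def ip_to_label_value(ip: str) -> str:
    """Turn an IP address into a label value by replacing '.' with '_'."""
    value = ip.replace(".", "_")
    if len(value) > 63 or not LABEL_VALUE_RE.match(value):
        raise ValueError(f"invalid label value: {value!r}")
    return value

print(ip_to_label_value("35.192.10.7"))  # -> 35_192_10_7
```

Note that `_` (and `.` itself) is already legal inside a label value; the stricter dash-only rule spark2ignite links to applies to DNS names rather than label values.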

@avivl avivl closed this as completed Jun 25, 2018
pdecat (Contributor, Author) commented Jun 25, 2018

@avivl are the labels removed when a node fails and its IP is reassigned to another one?

avivl (Contributor) commented Jun 25, 2018

@pdecat If the node gets a new IP, the value of the label will be overwritten

pdecat (Contributor, Author) commented Jun 25, 2018

What if there are fewer reserved IP addresses than nodes?

avivl (Contributor) commented Jun 25, 2018

@pdecat Nothing. If there are fewer IP addresses than nodes, nothing happens to those nodes. Once an IP becomes available, it will be assigned to one of them.

pdecat (Contributor, Author) commented Jun 25, 2018

It will still have a label referencing an IP address that has been reassigned to another node, won't it?

If so, pods can't rely on this label via nodeSelector or affinity constraints to be scheduled on those nodes.

Note: perhaps that wasn't clear in my original description, but that's what this issue is all about.

@WilliamDenniss:

> @avivl are the labels removed when a node fails and the IP was reassigned to another one?

Ideally, I think the kubeip operator would validate that both labels are correct as part of the regular poll, and remove labels that are no longer correct. From a consistency standpoint this seems like the ideal behavior.

But I'm curious whether this condition can actually happen today. I would think that once assigned, an IP wouldn't be unassigned. Or are you suggesting that the user or some other process may reassign it?

pdecat (Contributor, Author) commented Jun 25, 2018

Oh, I presumed kubeip would do it for failing or upgraded (cordon, drain, reboot) nodes, but now that you mention it, I realize it only watches for new/removed nodes.

I guess as long as failing and upgraded nodes keep their reserved IP address, this is covered.

avivl (Contributor) commented Jun 26, 2018

@pdecat @WilliamDenniss Just to be on the safe side, I'm now checking whether there is a node carrying the IP label and clearing that label.
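That reconciliation step could be sketched as a pure function like the following (all names are hypothetical illustrations; kubeip itself is written in Go):

```python
def find_stale_labels(assignments: dict[str, str],
                      node_labels: dict[str, str]) -> list[str]:
    """Return nodes whose kubip_assigned label no longer matches the IP
    actually assigned to them (including nodes with no assignment at all).

    assignments: node name -> label value of the IP currently assigned
    node_labels: node name -> current value of the kubip_assigned label
    """
    stale = []
    for node, labeled_ip in node_labels.items():
        if assignments.get(node) != labeled_ip:
            stale.append(node)
    return stale

# The IP moved from node-a to node-b, but node-a still carries the label:
assignments = {"node-b": "35_192_10_7"}
labels = {"node-a": "35_192_10_7", "node-b": "35_192_10_7"}
print(find_stale_labels(assignments, labels))  # -> ['node-a']
```

On each poll, the operator would clear the `kubip_assigned` label from every node this function returns, so nodeSelector and affinity constraints only ever match nodes that actually hold a static IP.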

pdecat (Contributor, Author) commented Jun 26, 2018

Great, thanks!
