
Add support for node taints #91

Closed
OperationalDev opened this issue Feb 21, 2023 · 3 comments · Fixed by #113

OperationalDev (Contributor) commented Feb 21, 2023

It would be nice if we could add support for node taints.
For example, if a node has been cordoned and carries a NoSchedule/NoExecute taint, it should be possible to exclude it from the report.

e.g.

kubectl get nodes
NAME              STATUS                     ROLES    AGE    VERSION
example-node-1    Ready,SchedulingDisabled   <none>   425d   v1.24.6
example-node-2    Ready                      <none>   227d   v1.24.6

kube-capacity

NODE              CPU REQUESTS    CPU LIMITS    MEMORY REQUESTS    MEMORY LIMITS
*                 560m (28%)      130m (7%)     572Mi (9%)         770Mi (13%)
example-node-1    220m (22%)      10m (1%)      192Mi (6%)         360Mi (12%)
example-node-2    340m (34%)      120m (12%)    380Mi (13%)        410Mi (14%)

Now if we exclude cordoned nodes:

kube-capacity --exclude-noschedule-nodes

NODE              CPU REQUESTS    CPU LIMITS    MEMORY REQUESTS    MEMORY LIMITS
*                 340m (34%)      120m (12%)    380Mi (13%)        410Mi (14%)
example-node-2    340m (34%)      120m (12%)    380Mi (13%)        410Mi (14%)

We can see we have less capacity available than we thought.
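
For context: cordoning a node sets spec.unschedulable (and recent Kubernetes versions also add the node.kubernetes.io/unschedulable:NoSchedule taint), so a filter could key off either field. Below is a minimal sketch in Go of what such node filtering might look like against the core/v1 types; ExcludeNoSchedule is a hypothetical helper for illustration, not necessarily how #113 implemented it:

// nodefilter.go — a standalone sketch, not kube-capacity source.
package nodefilter

import (
	corev1 "k8s.io/api/core/v1"
)

// ExcludeNoSchedule drops nodes that are cordoned or carry any
// NoSchedule/NoExecute taint, leaving only nodes pods can land on.
func ExcludeNoSchedule(nodes []corev1.Node) []corev1.Node {
	var schedulable []corev1.Node
	for _, node := range nodes {
		if node.Spec.Unschedulable {
			continue // cordoned via kubectl cordon
		}
		tainted := false
		for _, t := range node.Spec.Taints {
			if t.Effect == corev1.TaintEffectNoSchedule || t.Effect == corev1.TaintEffectNoExecute {
				tainted = true
				break
			}
		}
		if !tainted {
			schedulable = append(schedulable, node)
		}
	}
	return schedulable
}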

KR411-prog commented

@OperationalDev, I have a query on your issue.
Cordoned nodes do not accept new pods, but how does removing a cordoned node from the kube-capacity report help show a change in capacity? How do a node's requests and limits change when no new pods are scheduled on it?

OperationalDev (Contributor, Author) commented Mar 2, 2023

If you cannot schedule pods on a node, then you cannot use that capacity.
If I have 10 worker nodes, each with 2 CPU available, then my capacity is 20 CPU. But if 5 of those nodes are cordoned and unschedulable, my capacity is only 10 CPU, because I cannot schedule pods on the 5 cordoned nodes. kube-capacity in this instance will still show me 20 CPU worth of capacity available.
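
To make that arithmetic concrete, here is a hedged sketch (continuing the nodefilter package above) of how total schedulable CPU could be computed by skipping cordoned nodes; SchedulableCPU is a hypothetical helper, not kube-capacity's actual code:

package nodefilter

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// SchedulableCPU sums allocatable CPU over non-cordoned nodes only.
// With the example above (10 nodes x 2 CPU, 5 cordoned) it yields 10.
func SchedulableCPU(nodes []corev1.Node) *resource.Quantity {
	total := resource.NewQuantity(0, resource.DecimalSI)
	for _, node := range nodes {
		if node.Spec.Unschedulable {
			continue // cordoned capacity is unusable
		}
		cpu := node.Status.Allocatable[corev1.ResourceCPU]
		total.Add(cpu)
	}
	return total
}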

In a perfect world, nodes should not be cordoned for an extended period of time, but in my scenario, nodes can be cordoned for a couple of days and this can lead to running out of capacity.

barrykp (Contributor) commented Aug 25, 2023

I would like this feature as well: to be able to exclude tainted nodes.
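
Generalizing beyond cordoned nodes, excluding by an arbitrary taint could come down to a predicate like the sketch below (HasTaint is hypothetical, shown only to illustrate the shape of such a filter):

package nodefilter

import corev1 "k8s.io/api/core/v1"

// HasTaint reports whether a node carries the given taint key and effect,
// e.g. behind a hypothetical flag that excludes nodes by taint.
func HasTaint(node corev1.Node, key string, effect corev1.TaintEffect) bool {
	for _, t := range node.Spec.Taints {
		if t.Key == key && t.Effect == effect {
			return true
		}
	}
	return false
}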
