scale down issue with scale-down-utilization-threshold at 0 #6791
Labels: area/cluster-autoscaler, area/provider/aws, kind/bug
Which component are you using?:
cluster-autoscaler

Component version:
v1.29.0

What k8s version are you using (kubectl version)?:

What environment is this in?:
EKS/AWS
Launched with args like this:
What did you expect to happen?:
When nodes are empty (meaning no pods from a Deployment are running on them), scale down should happen.
What happened instead?:
Something prevents the nodes from scaling down; see this spurious log on one of the candidate nodes:
How to reproduce it (as minimally and precisely as possible):
Nothing more to add. The config shown should be sufficient.
Anything else we need to know?:
Setting scale-down-utilization-threshold to 0.01 seems to work, but it's a bit counter-intuitive. What we actually want is for cluster-autoscaler not to care about resource utilization at all and to simply scale down empty nodes. I wonder why such a complex heuristic is needed?
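For reference, a minimal sketch of how the workaround might be applied in the cluster-autoscaler Deployment spec. The flag names are real cluster-autoscaler flags; the deployment name, namespace, image tag, and the other flag values here are illustrative assumptions, not the exact manifest from this report:

```yaml
# Illustrative sketch only -- not the actual manifest used in this issue.
# --scale-down-utilization-threshold=0.01 is the workaround mentioned above,
# since a value of 0 appears to prevent nodes from being scaled down at all.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler        # typical name (assumption)
  namespace: kube-system          # typical namespace (assumption)
spec:
  template:
    spec:
      containers:
        - name: cluster-autoscaler
          image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.29.0
          command:
            - ./cluster-autoscaler
            - --cloud-provider=aws
            - --scale-down-enabled=true
            - --scale-down-utilization-threshold=0.01   # workaround; 0 did not trigger scale down
            - --scale-down-unneeded-time=10m            # illustrative value
```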