Description
Which component are you using?:
/area cluster-autoscaler
What version of the component are you using?:
v1.26.2
What k8s version are you using (kubectl version)?:
1.30.0
What environment is this in?:
AWS
What did you expect to happen?:
Cluster autoscaler to scale down nodes when no longer needed
What happened instead?:
Cluster Autoscaler was getting OOMKilled (we were running about ~340 nodes, with a 0.6G memory limit).
Whenever a new Cluster Autoscaler pod was created, the scale-down cooldown for nodes reset to 0s. Since Cluster Autoscaler was continuously getting OOMKilled after one loop, the cooldown never elapsed and it never managed to scale down nodes.
How to reproduce it (as minimally and precisely as possible):
Install Cluster Autoscaler with a memory limit, then scale up nodes within the existing node groups until it gets OOMKilled.
Anything else we need to know?:
Getting OOMKilled at ~340 nodes with a 0.6G RAM limit is surprising in itself, but what made this bug truly devastating is that the autoscaler was able to run one loop, which could have scaled down the nodes, except the cooldown timer had restarted. This makes me question the HA-ness of Cluster Autoscaler. I'd like to submit a fix where this data is stored as an annotation on the node itself, making the deployment stateless in that regard.
The relevant configs are a memory limit of 600MB and a scale-down cooldown of 10m.
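For reference, a sketch of roughly what our deployment looked like (flag names as in upstream Cluster Autoscaler; values from our setup):

```yaml
containers:
  - name: cluster-autoscaler
    command:
      - ./cluster-autoscaler
      - --cloud-provider=aws
      - --scale-down-unneeded-time=10m   # the cooldown that resets on pod restart
    resources:
      limits:
        memory: 600Mi                    # limit that was hit at ~340 nodes
```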
Activity
k8s-triage-robot commented on May 27, 2025
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
MenD32 commented on May 28, 2025
/remove-lifecycle stale