
Cluster Autoscaler gets OOMKilled instead of scaling down #7873

Open
@MenD32

Description

Which component are you using?:
/area cluster-autoscaler

What version of the component are you using?:

Component version: v1.26.2

What k8s version are you using (kubectl version)?:
1.30.0

What environment is this in?:
AWS

What did you expect to happen?:
Cluster autoscaler to scale down nodes when no longer needed

What happened instead?:
Cluster Autoscaler was getting OOMKilled (we were running ~340 nodes with a 0.6G memory limit).
Whenever a new Cluster Autoscaler pod is created, the per-node cooldown before scaling down resets to 0s. Since Cluster Autoscaler kept getting OOMKilled after one loop, it never managed to scale down any nodes.
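To illustrate the failure mode, here is a minimal sketch (not the actual Cluster Autoscaler code) of how an in-memory "unneeded since" timer behaves across an OOMKill, assuming, as the observed behaviour suggests, that this state lives only in the autoscaler pod's memory:

```go
package main

import (
	"fmt"
	"time"
)

// unneededSince maps node name -> the first time the node was seen as unneeded.
// Illustrative in-memory state: it is lost whenever the autoscaler pod restarts.
var unneededSince = map[string]time.Time{}

// scaleDownUnneededTime mirrors the 10m scale-down cooldown from our setup.
const scaleDownUnneededTime = 10 * time.Minute

// markUnneeded records when a node first became unneeded, keeping the earliest timestamp.
func markUnneeded(node string, now time.Time) {
	if _, ok := unneededSince[node]; !ok {
		unneededSince[node] = now
	}
}

// eligibleForScaleDown reports whether the node has been unneeded long enough.
func eligibleForScaleDown(node string, now time.Time) bool {
	since, ok := unneededSince[node]
	return ok && now.Sub(since) >= scaleDownUnneededTime
}

func main() {
	start := time.Now()
	markUnneeded("node-a", start)
	fmt.Println(eligibleForScaleDown("node-a", start.Add(11*time.Minute))) // true

	// Simulate an OOMKill: the new pod starts with empty state, so the
	// 10m clock begins again and the node is never removed.
	unneededSince = map[string]time.Time{}
	markUnneeded("node-a", start.Add(12*time.Minute))
	fmt.Println(eligibleForScaleDown("node-a", start.Add(13*time.Minute))) // false
}
```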

How to reproduce it (as minimally and precisely as possible):
Install Cluster Autoscaler with a memory limit, then scale up nodes within the existing node groups until the autoscaler gets OOMKilled.

Anything else we need to know?:

Getting OOMKilled at ~340 nodes with a 0.6G RAM limit is surprising on its own, but what made this bug truly devastating is that each new pod was able to run one loop, which could have scaled down the nodes, except that the timer had restarted. This makes me question the HA-ness of Cluster Autoscaler. I'd like to submit a fix that stores this data as an annotation on the node itself, making the deployment stateless in that regard.
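Here is a rough sketch of the direction I have in mind, using client-go to persist the "unneeded since" timestamp as a node annotation. The annotation key and helper names are hypothetical, not an existing Cluster Autoscaler API:

```go
// Package scaledown sketches persisting the "unneeded since" timestamp on the
// node itself, so a restarted autoscaler pod can resume the scale-down
// cooldown instead of starting it from zero.
package scaledown

import (
	"context"
	"encoding/json"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// unneededSinceAnnotation is a hypothetical key, not one Cluster Autoscaler uses today.
const unneededSinceAnnotation = "cluster-autoscaler.example/unneeded-since"

// RecordUnneededSince writes the timestamp as a node annotation if it is not
// already set, so the value survives autoscaler restarts.
func RecordUnneededSince(ctx context.Context, client kubernetes.Interface, nodeName string, now time.Time) error {
	node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if _, ok := node.Annotations[unneededSinceAnnotation]; ok {
		return nil // keep the original timestamp
	}
	patch, err := json.Marshal(map[string]interface{}{
		"metadata": map[string]interface{}{
			"annotations": map[string]string{
				unneededSinceAnnotation: now.Format(time.RFC3339),
			},
		},
	})
	if err != nil {
		return err
	}
	_, err = client.CoreV1().Nodes().Patch(ctx, nodeName, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}

// UnneededSince reads the timestamp back; a freshly started pod uses it to
// continue the existing cooldown rather than reset it.
func UnneededSince(ctx context.Context, client kubernetes.Interface, nodeName string) (time.Time, bool, error) {
	node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return time.Time{}, false, err
	}
	raw, ok := node.Annotations[unneededSinceAnnotation]
	if !ok {
		return time.Time{}, false, nil
	}
	t, err := time.Parse(time.RFC3339, raw)
	if err != nil {
		return time.Time{}, false, fmt.Errorf("parsing %s: %w", unneededSinceAnnotation, err)
	}
	return t, true, nil
}
```

With the timestamp stored on the node, a restarted (or replacement) autoscaler pod could resume the 10m cooldown where the previous pod left off instead of resetting it.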

For reference, the relevant configs are a memory limit of 600MB and a scale-down cooldown of 10m.

Activity

kind/bug label added on Feb 26, 2025
A pull request that will close this issue was linked on Apr 24, 2025
k8s-triage-robot commented on May 27, 2025

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

lifecycle/stale label added on May 27, 2025
MenD32 (Contributor, Author) commented on May 28, 2025

/remove-lifecycle stale

lifecycle/stale label removed on May 28, 2025