
scale down issue with scale-down-utilization-threshold at 0 #6791

Open
ut0mt8 opened this issue May 3, 2024 · 4 comments
Labels
area/cluster-autoscaler, area/provider/aws, kind/bug

Comments

@ut0mt8

ut0mt8 commented May 3, 2024

Which component are you using?:

cluster-autoscaler

Component version:

v1.29.0

What k8s version are you using (kubectl version)?:

Server Version: version.Info{Major:"1", Minor:"26+", GitVersion:"v1.26.14-eks-b9c9ed7", GitCommit:"7c3f2be51edd9fa5727b6ecc2c3fc3c578aa02ca", GitTreeState:"clean", BuildDate:"2024-03-02T03:46:35Z", GoVersion:"go1.21.7", Compiler:"gc", Platform:"linux/amd64"}

What environment is this in?:

In EKS on AWS, launched with args like this:

        - ./cluster-autoscaler
        - --cloud-provider=aws
        - --namespace=kube-system
        - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/cluster
        - --balance-similar-node-groups=true
        - --expander=least-waste
        - --ignore-daemonsets-utilization=true
        - --logtostderr=true
        - --scale-down-unneeded-time=5m
        - --scale-down-unready-time=5m
        - --scale-down-utilization-threshold=0 <======
        - --skip-nodes-with-local-storage=false
        - --skip-nodes-with-system-pods=false
        - --stderrthreshold=info
        - --v=4

What did you expect to happen?:

When nodes are empty (meaning no pods from deployments), scale-down should happen.

What happened instead?:

Something prevents the nodes from scaling down; see this spurious log:

unremovable: memory requested (0% of allocatable) is above the scale-down utilization threshold

on one of the candidate nodes.

How to reproduce it (as minimally and precisely as possible):

Nothing more to add; the config above should be sufficient.

Anything else we need to know?:

Setting scale-down-utilization-threshold to 0.01 seems to work, but it's a bit counter-intuitive. What we actually want is for cluster-autoscaler not to care about resources at all and simply scale down empty nodes. I wonder why such a complex heuristic is needed?
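
For illustration, here is a minimal Go sketch of why 0 and 0.01 behave so differently, assuming the eligibility check only treats a node as removable when its utilization is strictly below the threshold (the function below is illustrative, not actual cluster-autoscaler code):

    package main

    import "fmt"

    // removableByUtilization is an illustrative stand-in for the scale-down
    // eligibility check: a node is a scale-down candidate only when its
    // utilization is strictly below the configured threshold.
    func removableByUtilization(utilization, threshold float64) bool {
        return utilization < threshold
    }

    func main() {
        // With --scale-down-utilization-threshold=0, even an empty node
        // (0% utilization) is kept, because 0 >= 0.
        fmt.Println(removableByUtilization(0.0, 0.0)) // false -> unremovable

        // With 0.01, the same empty node becomes a scale-down candidate.
        fmt.Println(removableByUtilization(0.0, 0.01)) // true
    }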

ut0mt8 added the kind/bug label on May 3, 2024
@leoryu

leoryu commented May 6, 2024

I'm having the same issue as well; this is the related code:

if utilInfo.Utilization >= threshold {
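
Given that non-strict comparison, an empty node's utilization (0) still satisfies 0 >= 0 when the threshold is set to 0, so the node is treated as over the threshold and marked unremovable, which matches the "0% of allocatable is above the scale-down utilization threshold" log in the report.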

@Shubham82
Contributor

/area provider/aws
/area cluster-autoscaler

k8s-ci-robot added the area/provider/aws and area/cluster-autoscaler labels on Jun 4, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Sep 2, 2024
@Shubham82
Contributor

/remove-lifecycle stale

k8s-ci-robot removed the lifecycle/stale label on Sep 2, 2024