
drain-priority-config doesn't work #7851


Description

@ajvn

Which component are you using?:
/area cluster-autoscaler

What version of the component are you using?:

Component version: 1.29.4

What k8s version are you using (kubectl version)?:

$ kubectl version
v1.29.12

What environment is this in?:

Amazon EKS

What did you expect to happen?:

DaemonSet pods are evicted after pods that have a lower priority class, or no priority class at all.

What happened instead?:

All of the pods are evicted at the same time, and the critical DaemonSet pods are then redeployed instead of being gracefully removed.

How to reproduce it (as minimally and precisely as possible):

CA arguments:

- ./cluster-autoscaler
- --v=4
- --stderrthreshold=info
- --cloud-provider=aws
- --skip-nodes-with-local-storage=false
- --expander=least-waste
- --node-group-auto-discovery=<relevant-tags>
- --scale-down-utilization-threshold=0.7
- --balance-similar-node-groups
- --skip-nodes-with-system-pods=false
- --startup-taint=node.cilium.io/agent-not-ready
- --drain-priority-config='2000001000:120,0:60'
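
My understanding is that the flag value is a comma-separated list of priority:gracePeriodSeconds pairs, so the intent here is that pods at or above the system-node-critical priority (2000001000) get 120 seconds and are drained after everything else, which gets 60 seconds. For reference, a minimal sketch of the built-in PriorityClass that the first cutoff corresponds to (it ships with Kubernetes; I did not create it):

# Built-in PriorityClass matching the 2000001000 cutoff; shown only for reference.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: system-node-critical
value: 2000001000
globalDefault: false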

Anything else we need to know?:
The DaemonSet pod specification contains the proper priority class, in case that is a requirement. A trimmed sketch of what that spec roughly looks like follows below.
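
Here is the sketch (the name and image are placeholders, not the actual workload):

# Trimmed DaemonSet sketch; name and image are placeholders.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: critical-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: critical-agent
  template:
    metadata:
      labels:
        app: critical-agent
    spec:
      # Resolves to priority value 2000001000, matching the first drain-priority-config entry.
      priorityClassName: system-node-critical
      terminationGracePeriodSeconds: 120
      containers:
        - name: agent
          image: critical-agent:latest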

This leads me to believe that this is supported in version 1.29 as well:

Let me know if anything else is needed.
Thank you.
