Which component are you using?:
/area cluster-autoscaler
What version of the component are you using?:
Component version: 1.29.4
What k8s version are you using (kubectl version)?:
$ kubectl version
v1.29.12
What environment is this in?:
Amazon EKS
What did you expect to happen?:
DaemonSet pods are evicted only after pods with a lower priority class, or no priority class at all, have been evicted.
What happened instead?:
All of the pods are evicted at the same time; the critical DaemonSet pods are then redeployed elsewhere rather than being gracefully removed last.
How to reproduce it (as minimally and precisely as possible):
CA arguments:
- ./cluster-autoscaler
- --v=4
- --stderrthreshold=info
- --cloud-provider=aws
- --skip-nodes-with-local-storage=false
- --expander=least-waste
- --node-group-auto-discovery=<relevant-tags>
- --scale-down-utilization-threshold=0.7
- --balance-similar-node-groups
- --skip-nodes-with-system-pods=false
- --startup-taint=node.cilium.io/agent-not-ready
- --drain-priority-config='2000001000:120,0:60'
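For context, my reading of --drain-priority-config='2000001000:120,0:60' (please correct me if I'm misinterpreting the flag): pods with priority 2000001000 and above (i.e. system-node-critical, which these DaemonSets use) form one drain group with a 120-second shutdown grace period, and all remaining pods form a second group with 60 seconds, so I expected the lower-priority group to be drained first and the critical DaemonSet pods last.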
Anything else we need to know?:
The DaemonSet pod specifications contain the proper priority class, in case that's a requirement (illustrative excerpt below).
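For illustration, the DaemonSets set their priority class roughly as follows; the name and image are placeholders, not the actual manifest:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-critical-agent      # placeholder name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: example-critical-agent
  template:
    metadata:
      labels:
        app: example-critical-agent
    spec:
      # system-node-critical is the built-in priority class with value 2000001000,
      # matching the first cutoff in --drain-priority-config
      priorityClassName: system-node-critical
      containers:
        - name: agent
          image: example/agent:latest   # placeholder image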
This leads me to believe that this is supported in version 1.29 as well:
Let me know if anything else is needed.
Thank you.