unable to scale down when pods with pv as ephemeral storage are present #6710
Labels: area/cluster-autoscaler, kind/bug, lifecycle/stale
Which component are you using?: cluster-autoscaler, installed with the Helm chart
What version of the component are you using?: Chart version 9.34.0, image v1.28.2
What k8s version are you using (kubectl version)?:
What environment is this in?: AWS EKS
What did you expect to happen?: Scale-down should work, since the volumes are only used as ephemeral storage: https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes
What happened instead?:
Scale-down fails with "PersistentVolume and node mismatch for pod" and "no matching NodeSelectorTerms", and the pod cannot be rescheduled even when the cluster has capacity.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
The node group spans multiple AZs. I have set `skip-nodes-with-local-storage=false` and also added the `cluster-autoscaler.kubernetes.io/safe-to-evict: "true"` annotation to the pod, but I still get the mismatch error; before these changes I did not even get that far. I do not care which AZ the pod is assigned to, since the volumes are only used as ephemeral/temporary storage. What I care about is cost: if there is one node in each AZ and they can be consolidated onto a single node in one AZ, that is fine.
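The flag was set through the chart's extraArgs values, roughly like this (a sketch from memory rather than my exact values file):

```yaml
# cluster-autoscaler Helm chart values (sketch)
extraArgs:
  skip-nodes-with-local-storage: false
```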
Example volume ref:
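Roughly like the following (a minimal sketch of the pattern; the name, image, size, and storage class are illustrative, not the exact manifest from my cluster):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scratch-worker               # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: scratch-worker
  template:
    metadata:
      labels:
        app: scratch-worker
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
    spec:
      containers:
        - name: worker
          image: busybox:1.36
          command: ["sh", "-c", "sleep infinity"]
          volumeMounts:
            - name: scratch
              mountPath: /scratch
      volumes:
        - name: scratch
          ephemeral:                   # generic ephemeral volume
            volumeClaimTemplate:
              spec:
                accessModes: ["ReadWriteOnce"]
                storageClassName: gp3  # assumed EBS-backed class
                resources:
                  requests:
                    storage: 10Gi
```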
Since I am using a helm chart to create such a deployment, will creating the persistent volume explicitly with specific labels, and then referring in the volumeClaimTemplate help?
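Or would a StorageClass with delayed binding be the better route, so the volume's AZ is only pinned once the pod is scheduled? A sketch of what I mean, assuming the EBS CSI driver (I have not verified this helps with scale-down):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-delayed                  # illustrative name
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
```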