Would like a descheduler that evicts pods that have preferred node anti-affinity #1385
Comments
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned". In response to this:

> /close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Is your feature request related to a problem? Please describe.
I typically spread multiple replicas of a pod across worker nodes, which reduces the impact of losing a node. I use preferred (rather than required) anti-affinity so that if I do lose a node, I still maintain my desired replica count: if a node is lost and no replacement is available, Kubernetes reschedules the pod onto a node that already runs one. However, when a new node is added back to the cluster, the descheduler does not evict one of the two pods now sharing a node so that it can be rescheduled onto the new third node.
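For reference, a minimal sketch of the kind of preferred pod anti-affinity configuration described here (all names and the image are illustrative, not taken from the issue):

```yaml
# Illustrative Deployment: preferred (soft) pod anti-affinity spreads
# replicas across nodes but still allows co-location when nodes are scarce.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app          # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: my-app
                topologyKey: kubernetes.io/hostname
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0   # placeholder image
```

Because the rule is preferred rather than required, the scheduler may place two replicas on one node when only two nodes exist; that doubled-up state is exactly what this request wants the descheduler to clean up once a third node returns.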
Describe the solution you'd like
My deployment spreads 3 pods across 3 nodes. I have the above preferred anti-affinity configuration. When a node is lost, a new pod is created on one of the 2 remaining nodes (preferred). When a 3rd node is reintroduced into the cluster, I want the descheduler to evict one of the 2 pods running on a single node so it can be rescheduled onto the new 3rd node.
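To make the request concrete: the descheduler's existing RemovePodsViolatingInterPodAntiAffinity plugin only evaluates required (hard) anti-affinity, so the arg in the sketch below that opts in to preferred rules is hypothetical and exists only to illustrate the ask:

```yaml
# Hypothetical policy sketch in the v1alpha2 format; includePreferred is
# NOT a real descheduler arg today, it only illustrates the request.
apiVersion: "descheduler/v1alpha2"
kind: "DeschedulerPolicy"
profiles:
  - name: default
    pluginConfig:
      - name: "RemovePodsViolatingInterPodAntiAffinity"
        args:
          includePreferred: true   # hypothetical knob
    plugins:
      deschedule:
        enabled:
          - "RemovePodsViolatingInterPodAntiAffinity"
```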
Describe alternatives you've considered
Manually killing the pod.
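A possible workaround not mentioned in the issue (worth verifying against the descheduler docs for your version): express the spreading as a soft topology spread constraint instead of anti-affinity, since the RemovePodsViolatingTopologySpreadConstraints plugin can be configured to act on ScheduleAnyway (soft) constraints as well:

```yaml
# In the pod template: a soft spread constraint instead of anti-affinity.
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: my-app   # placeholder label
---
# Descheduler policy opting in to soft (ScheduleAnyway) constraints.
apiVersion: "descheduler/v1alpha2"
kind: "DeschedulerPolicy"
profiles:
  - name: default
    pluginConfig:
      - name: "RemovePodsViolatingTopologySpreadConstraints"
        args:
          constraints:
            - ScheduleAnyway
    plugins:
      balance:
        enabled:
          - "RemovePodsViolatingTopologySpreadConstraints"
```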
What version of descheduler are you using?
descheduler version: 0.29
Additional context