LowNodeUtilization does not take into account podAntiAffinity when choosing a pod to evict #335

I have a node that is over-utilized and a node that is under-utilized. There is also a deployment with a pod on both nodes, and an anti-affinity rule that prevents those pods being scheduled on the same node. Unfortunately, each time the descheduler runs, it evicts the pod for this app, which is promptly re-scheduled on the same node, since it cannot be moved to the under-utilized node. There are other pods that could be chosen.
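For context, a minimal Go sketch of the kind of required pod anti-affinity described here; the `app=web` label and the hostname topology key are illustrative assumptions, not values from the reporter's deployment:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Required anti-affinity: no two pods labeled app=web may share a node.
	affinity := &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "web"},
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
	fmt.Printf("%+v\n", affinity)
}
```

With only two nodes and one replica already on each, evicting either replica of such a deployment leaves the evicted pod nowhere to go but back to the node it came from.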
Comments
Yes, the descheduler won't check whether pods can actually be scheduled onto other nodes before evicting them, so I think it would make sense to add a mechanism that performs this check and falls back to other pods if the check fails.

/kind feature
@ForbesLindesay thanks for opening this issue. In general it would be great if the descheduler had the ability to more intelligently select which pods to evict for all strategies. There is a desire to be able to import the kube-scheduler code into the descheduler, which would allow leveraging the kube-scheduler's filtering and scoring capabilities. This has been discussed in #184 and #283. There is some work that needs to be completed before the kube-scheduler can be easily imported; see kubernetes/kubernetes#89930.
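To make the "check before evicting" idea concrete, here is a rough sketch of such a feasibility check. It is not actual descheduler or kube-scheduler code: the helper names are hypothetical, and it deliberately ignores namespaces and topology keys other than the node hostname.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// violatesRequiredAntiAffinity reports whether placing candidate on a node that
// already runs nodePods would break one of candidate's required anti-affinity
// terms. Namespaces and non-hostname topology keys are ignored for brevity.
func violatesRequiredAntiAffinity(candidate *corev1.Pod, nodePods []*corev1.Pod) bool {
	if candidate.Spec.Affinity == nil || candidate.Spec.Affinity.PodAntiAffinity == nil {
		return false
	}
	for _, term := range candidate.Spec.Affinity.PodAntiAffinity.RequiredDuringSchedulingIgnoredDuringExecution {
		sel, err := metav1.LabelSelectorAsSelector(term.LabelSelector)
		if err != nil {
			continue
		}
		for _, p := range nodePods {
			if sel.Matches(labels.Set(p.Labels)) {
				return true
			}
		}
	}
	return false
}

// hasFeasibleTarget reports whether candidate could land on at least one node
// other than its current one without violating its required anti-affinity.
func hasFeasibleTarget(candidate *corev1.Pod, podsByNode map[string][]*corev1.Pod) bool {
	for node, pods := range podsByNode {
		if node != candidate.Spec.NodeName && !violatesRequiredAntiAffinity(candidate, pods) {
			return true
		}
	}
	return false
}

func main() {
	web := map[string]string{"app": "web"}
	candidate := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Labels: web},
		Spec: corev1.PodSpec{
			NodeName: "node-a",
			Affinity: &corev1.Affinity{
				PodAntiAffinity: &corev1.PodAntiAffinity{
					RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
						LabelSelector: &metav1.LabelSelector{MatchLabels: web},
						TopologyKey:   "kubernetes.io/hostname",
					}},
				},
			},
		},
	}
	// node-b already runs a pod with app=web, so evicting the candidate would
	// only bounce it back to node-a; a smarter strategy would skip it.
	podsByNode := map[string][]*corev1.Pod{
		"node-b": {{ObjectMeta: metav1.ObjectMeta{Labels: web}}},
	}
	fmt.Println("worth evicting:", hasFeasibleTarget(candidate, podsByNode))
}
```

In practice this is the kind of filtering the kube-scheduler's plugins already implement, which is why importing that code, as discussed above, is preferable to reimplementing the checks inside the descheduler.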
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

/remove-lifecycle stale
FYI, the same issue happens with pods that are bound to a specific availability zone because their persistent volume claims can only be attached in that zone. If such pods are evicted, they will always be rescheduled into the same zone, which may mean the same node if there is only one node per zone.
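A rough illustration of that zone constraint, not code from the descheduler: the zone label keys are the standard Kubernetes topology labels, while the helper name and the us-east-1a value are made up for the example.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// allowedZones extracts the availability zones a PersistentVolume can be
// attached in, based on its node-affinity terms. Evicting a pod that uses
// such a volume can only move it to another node in one of these zones.
func allowedZones(pv *corev1.PersistentVolume) []string {
	var zones []string
	if pv.Spec.NodeAffinity == nil || pv.Spec.NodeAffinity.Required == nil {
		return zones
	}
	for _, term := range pv.Spec.NodeAffinity.Required.NodeSelectorTerms {
		for _, req := range term.MatchExpressions {
			// Check both the current and the legacy zone label.
			if req.Key == "topology.kubernetes.io/zone" ||
				req.Key == "failure-domain.beta.kubernetes.io/zone" {
				zones = append(zones, req.Values...)
			}
		}
	}
	return zones
}

func main() {
	pv := &corev1.PersistentVolume{
		Spec: corev1.PersistentVolumeSpec{
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "topology.kubernetes.io/zone",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"us-east-1a"},
						}},
					}},
				},
			},
		},
	}
	// If the zone has only one node, an evicted pod has nowhere else to go.
	fmt.Println("pod using this volume is restricted to zones:", allowedZones(pv))
}
```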
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

/remove-lifecycle stale

Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten

/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
Still valid.

/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

/remove-lifecycle stale
stale-bot is annoying.. @lixiang233 - can this issue be marked as a desired feature, so as to avoid needing to keep it alive (protection from stale-bot)?

@ghostsquad I skimmed that thread, but maybe you could point me to what we need to do to mark this as desired? We have other issues that would benefit from that too.
/lifecycle frozen |