fix: Cluster Autoscaler not scaling down nodes where Pods with hard topology spread constraints are scheduled #8164
Conversation
Signed-off-by: MenD32 <amit.mendelevitch@gmail.com>
…constraints Signed-off-by: MenD32 <amit.mendelevitch@gmail.com>
@@ -226,6 +226,7 @@ func (r *RemovalSimulator) findPlaceFor(removedNode string, pods []*apiv1.Pod, n
 			klog.Errorf("Simulating removal of %s/%s return error; %v", pod.Namespace, pod.Name, err)
 		}
 	}
+	r.clusterSnapshot.RemoveNodeInfo(removedNode)
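The added `RemoveNodeInfo` call matters because the simulator evaluates topology spread constraints against the snapshot's current node set: if the drained node is still present, its topology domain still counts toward skew, and candidate placements for the evicted pods can appear to violate `maxSkew` even though they would be valid after the node is gone. The following is a simplified, self-contained sketch of that skew check (illustrative only — it is not the actual autoscaler or scheduler code, and `fitsSpread` is a hypothetical helper):

```go
package main

import "fmt"

// fitsSpread mimics a hard (DoNotSchedule) topology spread check: after
// placing one pod in the target domain, the difference between the largest
// and smallest per-domain pod counts must not exceed maxSkew. Each key in
// podsPerDomain is a topology domain (e.g. a zone) that currently has at
// least one eligible node in the snapshot.
func fitsSpread(podsPerDomain map[string]int, target string, maxSkew int) bool {
	counts := map[string]int{}
	for domain, n := range podsPerDomain {
		counts[domain] = n
	}
	counts[target]++ // simulate placing the evicted pod here

	min, max := -1, 0
	for _, n := range counts {
		if n > max {
			max = n
		}
		if min == -1 || n < min {
			min = n
		}
	}
	return max-min <= maxSkew
}

func main() {
	// The node being removed is the only node in zone-c; its pod must be
	// re-placed on zone-a or zone-b. If the removed node is still in the
	// snapshot, zone-c remains an (empty) domain, so skew = 2-0 > 1.
	stale := map[string]int{"zone-a": 1, "zone-b": 1, "zone-c": 0}
	fmt.Println(fitsSpread(stale, "zone-a", 1)) // false: node looks unremovable

	// After RemoveNodeInfo, zone-c no longer contributes a domain: skew = 2-1 <= 1.
	fresh := map[string]int{"zone-a": 1, "zone-b": 1}
	fmt.Println(fitsSpread(fresh, "zone-a", 1)) // true: scale-down can proceed
}
```

This is why the call has to happen before `findPlaceFor` tries to schedule the evicted pods, rather than after the simulation completes.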
nit: Could you add a comment explaining that this is necessary for pod topology spread to work correctly?
Thanks for picking up the fix! Could you fill out the "Does this PR introduce a user-facing change?" section? This PR does change behavior: CA will now scale down in scenarios where it previously wouldn't. /ok-to-test
Signed-off-by: MenD32 <amit.mendelevitch@gmail.com>
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: MenD32, towca
What type of PR is this?
/kind fix
What this PR does / why we need it:
Nodes are mistakenly marked as unremovable under certain configurations of hard topology spread constraints, because the node being simulated for removal remains in the cluster snapshot and still counts as a spread domain.
Which issue(s) this PR fixes:
Fixes #8093
Fixes #8162
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: