Fix/aws asg unsafe decommission 5829 #6818
Conversation
This merge resolves the issue where the Kubernetes Cluster Autoscaler incorrectly decommissions actual instances instead of placeholders within AWS ASGs. The fix ensures that only placeholders are considered for scaling down when recent scaling activities fail, thereby preventing the accidental removal of active nodes. Enhanced unit tests and checks are included to ensure robustness. Fixes #5829
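To make the intent easier to follow, here is a minimal, self-contained Go sketch of the behavior described above; the types and helpers (`instance`, `asg`, `isPlaceholder`, `deleteInstances`) are illustrative stand-ins, not the autoscaler's actual code:

```go
// Illustrative sketch only: the struct fields and helpers below are
// hypothetical stand-ins, not the cluster-autoscaler's real AWS code.
package main

import "fmt"

type instance struct {
	Name        string
	placeholder bool
}

type asg struct {
	name        string
	desiredSize int
}

// isPlaceholder reports whether the instance is a synthetic placeholder the
// autoscaler created for capacity the ASG never actually launched.
func isPlaceholder(i instance) bool { return i.placeholder }

// deleteInstances shrinks the ASG only for placeholders, and only when the
// most recent scaling activity failed; real nodes are terminated explicitly.
func deleteInstances(a *asg, instances []instance, recentScalingActivityFailed bool) {
	for _, i := range instances {
		if isPlaceholder(i) && recentScalingActivityFailed {
			// Capacity was never delivered, so just lower the desired size.
			a.desiredSize--
			fmt.Printf("placeholder %s: decreasing desired size of %s to %d\n",
				i.Name, a.name, a.desiredSize)
			continue
		}
		// A real instance must never be removed by silently lowering the
		// desired count; it gets an explicit terminate call instead.
		fmt.Printf("terminating instance %s\n", i.Name)
	}
}

func main() {
	g := &asg{name: "example-asg", desiredSize: 10}
	deleteInstances(g, []instance{
		{Name: "i-0abc123", placeholder: false},
		{Name: "placeholder-example-asg-7", placeholder: true},
	}, true)
}
```

The point of the sketch is the branch condition: the desired size is lowered only for placeholders, and only when the latest scaling activity failed, so a healthy, registered node is never removed by accident.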
Keywords which can automatically close issues and at(@) or hashtag(#) mentions are not allowed in commit messages. The list of commits with invalid commit messages:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Hi @ruiscosta. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: ruiscosta. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/ok-to-test
@aaroniscode: Cannot trigger testing until a trusted user reviews the PR and leaves an /ok-to-test in its comments. In response to this: /ok-to-test
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
klog.V(4).Infof("instance %s is detected as a placeholder, decreasing ASG requested size instead "+
	"of deleting instance", instance.Name)
m.decreaseAsgSizeByOneNoLock(commonAsg)
if !recentScalingActivitySuccess {
Hi @ruiscosta I don't believe this actually solves the problem. Consider the following scenario:
- We scale up the ASG by 10 instances. AWS creates 3 of them, and then fails on the remaining 7.
- DeleteInstances is called with those 10 instances; 7 of them are placeholders.
- In line 321 we check to see if the most recent scaling activity was successful or not, which returns false since 7 instances could not be created.
- Now for each instance in the loop, we decrease the ASG size by one, which reduces the ASG size by 7.
- In between the check in line 321 and (say) the 5th iteration of the loop, AWS launches a new instance, which joins the cluster. Our information about what instances are actually placeholders is now out of date, and we get the same problem that we had before.
We "could" check the recent scaling activity in every iteration of the loop, at the expense of making a lot more API calls, which I think is undesirable, and is still subject to a race between when you make the check and when you change the ASG size.
/ok-to-test
/easycla
This merge resolves an issue in the Kubernetes Cluster Autoscaler where actual instances within AWS Auto Scaling Groups (ASGs) were incorrectly decommissioned instead of placeholders. The updates ensure that placeholders are exclusively targeted for scaling down under conditions where recent scaling activities have failed. This prevents the accidental termination of active nodes and enhances the reliability of the autoscaler in AWS environments. Fixes kubernetes#5829
This change expands on PR kubernetes#6818. This merge resolves an issue in the Kubernetes Cluster Autoscaler where actual instances within AWS Auto Scaling Groups (ASGs) were incorrectly decommissioned instead of placeholders. The updates ensure that placeholders are exclusively targeted for scaling down under conditions where recent scaling activities have failed. This prevents the accidental termination of active nodes and enhances the reliability of the autoscaler in AWS environments. Fixes kubernetes#5829
Merge branch 'fix/aws-asg-placeholder-decommission'
This merge resolves an issue in the Kubernetes Cluster Autoscaler where actual instances within AWS Auto Scaling Groups (ASGs) were incorrectly decommissioned instead of placeholders. The updates ensure that placeholders are exclusively targeted for scaling down under conditions where recent scaling activities have failed. This prevents the accidental termination of active nodes and enhances the reliability of the autoscaler in AWS environments.
Key improvements include:
Fixes #5829
What type of PR is this?
/kind bug
What this PR does / why we need it:
This PR prevents the Kubernetes Cluster Autoscaler from erroneously decommissioning actual nodes during scale-down operations in AWS environments, which could lead to unintended service disruptions.
Which issue(s) this PR fixes:
Fixes #5829
Special notes for your reviewer:
Does this PR introduce a user-facing change?