[aws-eks] Unexpected node replacement due to using SSM for AMIs #7273
Thanks for reporting. I can see why this can cause a pretty major headache. We will look into this.
@stefanolczak The root cause is the update policy of the ASG. Currently, the default for capacity added via `addCapacity` is `UpdateType.ROLLING_UPDATE`. We are considering changing this default, but in the meantime you can specify an update policy of your own to avoid this:

```ts
import * as autoscaling from '@aws-cdk/aws-autoscaling';
import * as ec2 from '@aws-cdk/aws-ec2';

this.cluster.addCapacity('Nodes', {
  instanceType: new ec2.InstanceType('t2.medium'),
  minCapacity: 3,
  updateType: autoscaling.UpdateType.NONE,
});
```
…e of ASG (#9746)

This might be the PR with the highest explanation/code ratio I've ever made :)

When a value changes for an AMI in a managed SSM parameter, it should not cause a replacement of the ASG nodes. The reasoning is that managed params can change over time with no control on the user's part. Because of this, the change will not be reflected in `cdk diff`, which creates a situation where every deployment can potentially cause node replacement without notice.

There are two scenarios in which the cluster interacts with an `AutoScalingGroup`:

### `addCapacity`

When one uses `cluster.addCapacity`, we implicitly create an `AutoScalingGroup` that uses either the `BottleRocketImage` or the `EksOptimizedImage` as the machine image, with no option to customize it. Both of these images fetch their AMIs from a managed SSM parameter (`/aws/service/eks/optimized-ami` or `/aws/service/bottlerocket`). This means that we create the situation described above by **default**.

https://github.com/aws/aws-cdk/blob/5af718bab8522f1a4e7f70e7221f4878a15aa4a4/packages/%40aws-cdk/aws-eks/lib/cluster.ts#L779-L785

A more reasonable default in this case would be `UpdateType.NONE` instead of `UpdateType.ROLLING_UPDATE`. Note that with `UpdateType.NONE`, even if the user explicitly changes the machine image configuration (by specifying a different `machineImageType`), node replacement will not occur, even though `cdk diff` will clearly show a configuration change. In any case, `updateType` can always be passed explicitly to override the default behavior.

### `addAutoScalingGroup`

When one uses `cluster.addAutoScalingGroup`, the `AutoScalingGroup` is created by the user. The default value for `updateType` in the `AutoScalingGroup` construct is `UpdateType.NONE`, so unless the user explicitly configured `UpdateType.ROLLING_UPDATE`, node replacement should not occur. Having said that, when a user does specify `UpdateType.ROLLING_UPDATE`, it's not intuitive that an update might happen without any explicit configuration change. In fact, this is documented in the images that use SSM to fetch the AMI:

https://github.com/aws/aws-cdk/blob/5af718bab8522f1a4e7f70e7221f4878a15aa4a4/packages/%40aws-cdk/aws-ec2/lib/machine-image.ts#L216-L226

---

There is no way for us to selectively apply the update policy: either we don't use it at all, meaning intentional user changes won't replace nodes either, or we use it for everything, meaning implicit changes will cause replacement. Ideally, we should consider moving away from using these managed SSM params in launch configurations, but that requires some additional investigation.

This PR simply removes the `UpdateType.ROLLING_UPDATE` default from the `addCapacity` method, as a balance between all the considerations mentioned above.

Fixes #7273

---

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
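To illustrate the `addAutoScalingGroup` path described above, here is a minimal sketch (not code from the PR) assuming an existing `cluster` in scope inside a construct; the region key and AMI ID are placeholders. Pinning the AMI removes the deploy-time SSM lookup, so a deliberate `ROLLING_UPDATE` opt-in only ever acts on changes that are visible in `cdk diff`:

```ts
import * as autoscaling from '@aws-cdk/aws-autoscaling';
import * as ec2 from '@aws-cdk/aws-ec2';

// User-created ASG: the AutoScalingGroup construct itself defaults to
// UpdateType.NONE. Pinning the AMI (placeholder ID below) removes the
// deploy-time SSM lookup, so nodes are only replaced when this code changes.
const asg = new autoscaling.AutoScalingGroup(this, 'Nodes', {
  vpc: cluster.vpc,
  instanceType: new ec2.InstanceType('t3.medium'),
  machineImage: ec2.MachineImage.genericLinux({
    'eu-west-1': 'ami-00000000000000000', // placeholder: a pinned EKS-optimized AMI ID
  }),
  minCapacity: 3,
  updateType: autoscaling.UpdateType.ROLLING_UPDATE, // deliberate opt-in
});

cluster.addAutoScalingGroup(asg, {});
```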
#4156 introduced retrieving the AMI ID for EKS worker nodes from SSM, by synthesizing a CloudFormation parameter of type `AWS::SSM::Parameter::Value<String>` into the template. It's a great feature for new deployments, but it causes trouble when updating already-deployed stacks with EKS worker nodes. The value of the parameter changes whenever AWS releases a new version of the AMI; for example, yesterday it changed from `amazon-eks-node-1.14-v20200312` to `amazon-eks-node-1.14-v20200406`.

The main problem is that this change is not shown in `cdk diff`. Yet if we change anything else in the stack, even something irrelevant, CloudFormation implicitly updates all Launch Configurations used by the AutoScalingGroups of the EKS worker nodes. That results in the replacement of every EC2 instance (EKS worker node) in every ASG without any notice before the deployment. It's really frustrating, because we don't know what will be updated during the deployment, and replacing every node is a big change we should be warned about before the deployment starts. Can we at least detect the change in `cdk diff`?
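To make the mechanics concrete, here is a minimal sketch of the kind of lookup involved, assuming `this` is a construct scope (simplified; it is not the literal code from #4156). `valueForStringParameter` synthesizes a CloudFormation parameter of type `AWS::SSM::Parameter::Value<String>`, so the concrete AMI ID only materializes at deploy time and `cdk diff` has nothing to compare:

```ts
import * as ssm from '@aws-cdk/aws-ssm';

// Simplified sketch of how the EKS-optimized image resolves its AMI.
// This synthesizes a CloudFormation parameter of type
// AWS::SSM::Parameter::Value<String>; CloudFormation resolves the current
// value at deploy time, so the synthesized template (and therefore
// `cdk diff`) never shows the AMI ID changing.
const amiId = ssm.StringParameter.valueForStringParameter(
  this,
  '/aws/service/eks/optimized-ami/1.14/amazon-linux-2/recommended/image_id',
);
```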
Reproduction Steps
Error Log
Environment
Other
This is a 🐛 Bug Report