README: Add note about cluster-autoscaler not supporting multiple AZs #647
This is often said but not entirely true. We use multi-AZ ASGs with the cluster-autoscaler and they work fine for workloads that aren't AZ-specific. The mechanism/'issue' is just as explained: the cluster-autoscaler treats every node in an ASG as interchangeable, so when it scales up a multi-AZ ASG for a Pod pinned to a particular zone, AWS may launch the new node in a different zone and the Pod can stay pending.
Because overall our ASGs and workloads are very AZ-balanced, even our soft Pod anti-affinity is almost always satisfied.
If your workloads all have single-AZ PVCs and hard anti-affinity requirements (e.g. etcd or other quorum hosting), then the advice to use single-AZ node pools is of course completely valid.
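For concreteness, here's a minimal sketch of the kind of soft Pod anti-affinity I mean above (the `app: my-app` selector is a placeholder; the zone label key depends on your Kubernetes version — older clusters use `failure-domain.beta.kubernetes.io/zone`, newer ones `topology.kubernetes.io/zone`):

```yaml
# Pod spec fragment: the scheduler *prefers* to spread replicas across
# zones, but will still schedule a Pod when only one zone has capacity.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: my-app  # placeholder label selecting this workload's Pods
          topologyKey: failure-domain.beta.kubernetes.io/zone
```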
@mgalgs Hey! Thanks for your contribution.
Yep, I believe that @whereisaaron's explanation is valid, too. You may already have read it, but for more context, I'm sharing the original discussion regarding the gotcha of CA: kubernetes-retired/contrib#1552 (comment)
Maybe we'd better add a dedicated section in the README for this?
I'm not a good writer but I'd propose something like the below as a foundation:
Ensure that you have a separate nodegroup per availability zone when your workload is zone-aware!
To create a separate nodegroup per AZ, just replicate your nodegroup definition for each AZ and restrict each copy to a single zone via `availabilityZones`:
```yaml
nodeGroups:
  - name: ng1-public
    instanceType: m5.xlarge
    # availabilityZones: ["eu-west-2a", "eu-west-2b"]
```
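For illustration, the replicated result could look something like this (nodegroup names and zones are just examples):

```yaml
nodeGroups:
  - name: ng1-public-2a
    instanceType: m5.xlarge
    availabilityZones: ["eu-west-2a"]  # one nodegroup pinned to each AZ
  - name: ng1-public-2b
    instanceType: m5.xlarge
    availabilityZones: ["eu-west-2b"]
```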
Cheers @mgalgs. Suggestions and explanation:
There is no need to do this. If your workload is not AZ-specific, then by definition it doesn't mind being re-balanced. This setting would be a work-around if you have (unbalanced) AZ-specific requests that drive unbalanced ASGs and you don't want a re-balance undoing that. But in that case you should be using per-AZ ASGs anyway, as your other criteria recommend.
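(For anyone who does want that workaround: the underlying mechanism is suspending the ASG's `AZRebalance` scaling process. A sketch, assuming your eksctl version exposes the `asgSuspendProcesses` nodegroup field — check your version's config schema:)

```yaml
nodeGroups:
  - name: ng1-public
    instanceType: m5.xlarge
    # Suspend AZRebalance so AWS doesn't terminate and relaunch nodes to
    # even out AZs behind the cluster-autoscaler's back.
    asgSuspendProcesses: ["AZRebalance"]
```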
'Soft' affinity requirements that use `preferredDuringSchedulingIgnoredDuringExecution` will still be scheduled even when the preference can't be met, so they don't get stuck pending the way hard requirements do.
You can certainly scale to and from zero nodes with a multi-AZ ASG - on AWS at least. This is because you can add the labels needed as node/affinity selectors as AWS tags on the ASG. The cluster-autoscaler reads those tags to work out what labels nodes from that ASG would have, even while the group has no running nodes.
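A sketch of what that looks like in eksctl config (the `nodetype: batch` label is a made-up example, and this assumes your eksctl version permits scaling to zero; the `k8s.io/cluster-autoscaler/node-template/label/...` tag prefix is what the cluster-autoscaler understands for scale-from-zero):

```yaml
nodeGroups:
  - name: ng-scale-to-zero
    instanceType: m5.xlarge
    minSize: 0
    maxSize: 4
    desiredCapacity: 0
    labels:
      nodetype: batch  # example node label targeted by a nodeSelector/affinity
    tags:
      # Mirror the label as an ASG tag so the autoscaler can predict the
      # labels of nodes from this group even when it currently has none.
      k8s.io/cluster-autoscaler/node-template/label/nodetype: batch
```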