This repository has been archived by the owner on Apr 17, 2019. It is now read-only.
We are receiving requests to provide an option to grow and shrink node groups more evenly - in many cases the node groups are in different zones/subregions but contain the same type of machines. Having an imbalance in the node groups increases the maximum severity of a node group outage.
AFAIK, CA as of today already randomly chooses an ASG from the candidates (i.e. ASGs that serve currently unschedulable pods and hence should be scaled out).
If that's true, what you mean is to make CA first try to choose the ASG with the fewest nodes, right?
Assuming the above is correct, I'm going to implement the balance expander.
It will try to balance the number of nodes among the target node groups by returning the expander.Option with the least NodeCount.
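The selection logic could be sketched roughly like this - a minimal stand-alone version, where `Option` is a simplified stand-in for cluster-autoscaler's `expander.Option` (the field names here are illustrative, not the real struct definition):

```go
package main

import "fmt"

// Option is a simplified stand-in for expander.Option: a candidate
// node group together with its current node count.
type Option struct {
	NodeGroupID string
	NodeCount   int
}

// leastNodeCount sketches the proposed balance expander: among the
// candidate options, return the one whose node group currently has
// the fewest nodes, so repeated scale-ups even out group sizes.
func leastNodeCount(options []Option) *Option {
	var best *Option
	for i := range options {
		if best == nil || options[i].NodeCount < best.NodeCount {
			best = &options[i]
		}
	}
	return best
}

func main() {
	candidates := []Option{
		{NodeGroupID: "asg-zone-a", NodeCount: 5},
		{NodeGroupID: "asg-zone-b", NodeCount: 2},
		{NodeGroupID: "asg-zone-c", NodeCount: 4},
	}
	fmt.Println(leastNodeCount(candidates).NodeGroupID) // asg-zone-b
}
```

On ties this picks the first candidate; the real expander interface could break ties randomly to avoid always favoring the same group.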