Fix some typos in comment #1575
Conversation
JoeWrightss
commented
Jan 12, 2019
- "scheduable" to "schedulable".
- "SynchornizedBeforeSuite" to "SynchronizedBeforeSuite".
Signed-off-by: zhoulin xie <zhoulin.xie@daocloud.io>
/assign @bskiba
```diff
@@ -142,4 +142,4 @@ If you'd like to scale node groups from 0, an `autoscaling:DescribeLaunchConfigu
 - Cluster autoscaler is not zone aware (for now), so if you wish to span multiple availability zones in your autoscaling groups beware that cluster autoscaler will not evenly distribute them. For more information, see https://github.com/kubernetes/contrib/pull/1552#discussion_r75532949.
 - By default, cluster autoscaler will not terminate nodes running pods in the kube-system namespace. You can override this default behaviour by passing in the `--skip-nodes-with-system-pods=false` flag.
 - By default, cluster autoscaler will wait 10 minutes between scale down operations, you can adjust this using the `--scale-down-delay-after-add`, `--scale-down-delay-after-delete`, and `--scale-down-delay-after-failure` flag. E.g. `--scale-down-delay-after-add=5m` to decrease the scale down delay to 5 minutes after a node has been added.
-- If you're running multiple ASGs, the `--expander` flag supports three options: `random`, `most-pods` and `least-waste`. `random` will expand a random ASG on scale up. `most-pods` will scale up the ASG that will scheduable the most amount of pods. `least-waste` will expand the ASG that will waste the least amount of CPU/MEM resources. In the event of a tie, cluster autoscaler will fall back to `random`.
+- If you're running multiple ASGs, the `--expander` flag supports three options: `random`, `most-pods` and `least-waste`. `random` will expand a random ASG on scale up. `most-pods` will scale up the ASG that will schedulable the most amount of pods. `least-waste` will expand the ASG that will waste the least amount of CPU/MEM resources. In the event of a tie, cluster autoscaler will fall back to `random`.
```
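As a sketch of how the flags discussed in the diff above fit together, an invocation might look like the following. The flag names come from the documentation being edited; the specific values, and the `--cloud-provider=aws` flag, are illustrative assumptions rather than recommendations:

```shell
# Illustrative cluster-autoscaler invocation combining the flags from the
# README excerpt above. Values are examples only, not recommendations.
cluster-autoscaler \
  --cloud-provider=aws \
  --expander=least-waste \
  --skip-nodes-with-system-pods=false \
  --scale-down-delay-after-add=5m \
  --scale-down-delay-after-delete=0s \
  --scale-down-delay-after-failure=3m
```

Here `--expander=least-waste` picks the ASG that wastes the least CPU/memory, and setting `--skip-nodes-with-system-pods=false` allows scale-down of nodes running kube-system pods, as described above.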
most-pods
will scale up the ASG that will schedulable the most amount of pods
This sentence doesn't make sense. Can you fix it while you're at it?
Thank you for guiding me, @bskiba🐯.
Signed-off-by: zhoulin xie <zhoulin.xie@daocloud.io>
Thanks!
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: bskiba, JoeWrightss The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Add missing line in 1.23+ CA versions. Related old change: https://dev.azure.com/AzureContainerUpstream/Kubernetes/_git/autoscaler/commit/9308ae62d84a1cac564011db2ab3b4b570efecef?refName=refs%2Fheads%2Fcluster-autoscaler-release-1.21. It looks like this line was skipped in 1.23+ CA versions. Related work items: kubernetes#1575