

Cluster Autoscaler

Introduction

Cluster Autoscaler is a tool that automatically adjusts the size of the Kubernetes cluster when one of the following conditions is true:

  • there are pods that failed to run in the cluster due to insufficient resources,
  • there are nodes in the cluster that have been underutilized for an extended period of time and their pods can be placed on other existing nodes.

FAQ/Documentation

Is available HERE.

Releases

We recommend using Cluster Autoscaler with the Kubernetes master version for which it was meant. The combinations below have been tested on GCP. We don't do cross-version testing or compatibility testing in other environments. Some user reports indicate successful use of a newer version of Cluster Autoscaler with older clusters, however, there is always a chance that it won't work as expected.

Starting from Kubernetes 1.12, the versioning scheme was changed to match Kubernetes minor releases exactly.

Kubernetes Version    CA Version
1.13.X                1.13.X
1.12.X                1.12.X
1.11.X                1.3.X
1.10.X                1.2.X
1.9.X                 1.1.X
1.8.X                 1.0.X
1.7.X                 0.6.X
1.6.X                 0.5.X, 0.6.X*
1.5.X                 0.4.X
1.4.X                 0.3.X

*Cluster Autoscaler 0.5.X is the official version shipped with k8s 1.6. We've done some basic tests using k8s 1.6 / CA 0.6 and we're not aware of any problems with this setup. However, Cluster Autoscaler internally simulates Kubernetes' scheduler and using different versions of scheduler code can lead to subtle issues.

Notable changes

For CA 1.1.2 and later, please check release notes.

CA version 1.1.1:

  • Fixes around metrics in the multi-master configuration.
  • Fixes for unready-node issues when quota is overrun.

CA version 1.1.0:

CA version 1.0.3:

  • Adds support for the safe-to-evict annotation on pods. Pods with this annotation can be evicted even if they don't meet other requirements for it.
  • Fixes an issue when too many nodes with GPUs could be added during scale-up (https://github.com/kubernetes/kubernetes/issues/54959).
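The safe-to-evict annotation mentioned above is set in the pod's metadata; a minimal sketch (pod name, container name, and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod            # placeholder name
  annotations:
    cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
spec:
  containers:
  - name: app                  # placeholder container
    image: nginx
```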

CA Version 1.0.2:

CA Version 1.0.1:

CA Version 1.0:

With this release we graduated Cluster Autoscaler to GA.

  • Support for 1000 nodes running 30 pods each. See: Scalability testing report
  • Support for 10 min graceful termination.
  • Improved eventing and monitoring.
  • Node allocatable support.
  • Removed Azure support. See: PR removing support with reasoning behind this decision
  • `cluster-autoscaler.kubernetes.io/scale-down-disabled` annotation for marking nodes that should not be scaled down.
  • `scale-down-delay-after-delete` and `scale-down-delay-after-failure` flags replaced `scale-down-trial-interval`.
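The scale-down-disabled annotation goes on the node object itself; for example (node name is a placeholder):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: example-node           # placeholder name
  annotations:
    cluster-autoscaler.kubernetes.io/scale-down-disabled: "true"
```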

CA Version 0.6:

CA Version 0.5.4:

  • Fixes problems with node drain when pods are ignoring SIGTERM.

CA Version 0.5.3:

CA Version 0.5.2:

CA Version 0.5.1:

CA Version 0.5:

  • CA continues to operate even if some nodes are unready and is able to scale them down.
  • CA exports its status to kube-system/cluster-autoscaler-status config map.
  • CA respects PodDisruptionBudgets.
  • Azure support.
  • Alpha support for dynamic config changes.
  • Multiple expanders to decide which node group to scale up.

CA Version 0.4:

  • Bulk empty node deletions.
  • Better scale-up estimator based on binpacking.
  • Improved logging.

CA Version 0.3:

  • AWS support.
  • Performance improvements around scale down.

Deployment

Cluster Autoscaler is designed to run on the Kubernetes master node. This is the default deployment strategy on GCP. It is possible to run a customized deployment of Cluster Autoscaler on worker nodes, but extra care needs to be taken to ensure that Cluster Autoscaler remains up and running. Users can put it into the kube-system namespace (Cluster Autoscaler doesn't scale down nodes with non-mirrored kube-system pods running on them) and set a `priorityClassName: system-cluster-critical` property on the pod spec (to prevent the pod from being evicted).
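The relevant parts of such a pod spec might look like the following sketch (the pod name and image tag are illustrative placeholders, not the official manifest):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cluster-autoscaler     # placeholder name
  namespace: kube-system
spec:
  priorityClassName: system-cluster-critical   # keeps the pod from being evicted
  containers:
  - name: cluster-autoscaler
    image: k8s.gcr.io/cluster-autoscaler:v1.13.1   # example tag; pick one matching your cluster version
```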

Supported cloud providers: