
Make the number of ramp steps configurable #374

Merged · 1 commit merged into master on Oct 25, 2021

Conversation

jonathanbeber
Contributor

In #371 we introduced steps to make scaling up possible even when
the HPA enforces a minimum 10% change. The problem is that 10% might not
be sufficient for some specific scaling scenarios.

For example, an application targeting 12 pods and using a
ScalingSchedule with a value of 10000 to achieve that will require a
target of 833. With 10 ramp steps, the 90% bucket returns a metric of
9000 and the HPA calculates (9000/833) 10.8 pods, rounding to 11 pods.
Once the metric reaches the point of returning 100% it won't be
effective, since the change between the current number of pods (11) and
the desired one (12) is less than 10%.

This commit does not try to tackle this problem completely, since the
10% rule is not fixed: it might change among different clusters and also
depends on the value given to each ScalingSchedule. Therefore, this
commit makes the number of ramp steps configurable via the
`--scaling-schedule-ramp-steps` config flag, defaulting to 10.

Signed-off-by: Jonathan Juares Beber <jonathanbeber@gmail.com>
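The interaction described above can be sketched numerically. This is a minimal model with hypothetical helpers (`desired_replicas`, `ramped_metric` are illustrative names, not code from this adapter or the Kubernetes controller); it only models the 10% tolerance and the ramp buckets:

```python
def desired_replicas(current, metric, target, tolerance=0.10):
    # Simplified model of the HPA decision: scale towards metric/target,
    # but skip scaling when the relative change is within the tolerance.
    ratio = metric / (current * target)
    if abs(ratio - 1.0) <= tolerance:
        return current
    return round(current * ratio)

def ramped_metric(full_value, schedule_fraction, steps=10):
    # Quantize the schedule's value into `steps` ramp buckets; `steps`
    # is the number this commit makes configurable.
    bucket = min(int(schedule_fraction * steps), steps) / steps
    return full_value * bucket

# The 12-pod scenario from the description, with the default 10 steps:
print(desired_replicas(1, ramped_metric(10000, 0.95), 833))  # 90% bucket: scales to 11
print(desired_replicas(11, 10000, 833))                      # full value: stuck at 11
```

With 10 buckets the final 90% → 100% jump asks for a change of roughly 9% on 11 pods, which falls inside the HPA tolerance; that is exactly why the step count is now configurable.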
@mikkeloscar
Contributor

👍

@jonathanbeber
Contributor Author

👍

@jonathanbeber jonathanbeber merged commit 1c9038b into master Oct 25, 2021
@jonathanbeber jonathanbeber deleted the configurable-buckets branch October 25, 2021 08:21
jonathanbeber added a commit to zalando-incubator/kubernetes-on-aws that referenced this pull request Oct 25, 2021
This commit updates the kube-metrics-adapter to include the changes
introduced in [kube-metrics-adapter#374][kube-metrics-adapter_374]. It
sets the default value to 5 steps, allowing scaling in 20% chunks. In
simulations, any deployment bigger than 5 pods could face the issue of
the change being smaller than 10%.

[kube-metrics-adapter_374]: zalando-incubator/kube-metrics-adapter#374
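As a quick arithmetic check of the 20% chunks (a sketch assuming the HPA simply rounds metric/target to the nearest integer, as in the PR description; the numbers reuse the ~12-pod example with a schedule value of 10000 and a target of 833):

```python
# Replica counts per ramp bucket with the new default of 5 steps.
target = 833
buckets = [10000 * i / 5 for i in range(1, 6)]   # 2000, 4000, ..., 10000
replicas = [round(v / target) for v in buckets]  # [2, 5, 7, 10, 12]

# The final jump, 10 -> 12 pods, is a 20% change: comfortably above
# the HPA's 10% tolerance, so the schedule reaches its full value.
print(replicas)
```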
@tkrop
Member

tkrop commented Oct 26, 2021

👍
