
hpa: Don't scale down if at least one metric was invalid #99514

Conversation

mikkeloscar
Contributor

@mikkeloscar mikkeloscar commented Feb 26, 2021

What type of PR is this?

/kind bug
/kind regression

What this PR does / why we need it:

The official autoscaling docs (https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details) state the following:

If multiple metrics are specified in a HorizontalPodAutoscaler, this calculation is done for each metric, and then the largest of the desired replica counts is chosen. If any of these metrics cannot be converted into a desired replica count (e.g. due to an error fetching the metrics from the metrics APIs) and a scale down is suggested by the metrics which can be fetched, scaling is skipped. This means that the HPA is still capable of scaling up if one or more metrics give a desiredReplicas greater than the current value.
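The quoted behavior can be sketched as follows. This is an illustrative Python model of the documented algorithm, not the controller's actual code; the function name and shape are hypothetical:

```python
def desired_replicas(proposals, current):
    """Model of the documented HPA multi-metric algorithm.

    `proposals` is a list of per-metric desired replica counts, with
    None marking a metric that could not be converted into a replica
    count (e.g. an error fetching it from the metrics API).
    """
    valid = [p for p in proposals if p is not None]
    if not valid:
        # No metric could be evaluated: keep the current replica count.
        return current
    desired = max(valid)  # the largest desired count is chosen
    if len(valid) < len(proposals) and desired < current:
        # Some metric is missing and the rest suggest a scale-down:
        # skip scaling rather than act on partial information.
        return current
    return desired  # scaling up remains possible
```

For example, with `current=5` and proposals `[3, None]` the function keeps 5 replicas, while `[7, None]` still scales up to 7.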

This was working as stated up to and including Kubernetes v1.15. In v1.16, #74526 introduced a change that dropped the behavior of not scaling down when one metric was not correctly observed.

This PR re-introduces the old logic and reverts two tests that appeared to cover the behavior but did not test the right thing (the scale-down was prevented by the stabilization window, not by the invalid metric). It also removes the two tests asserting that scale-down does happen despite at least one metric being unavailable/invalid.

Which issue(s) this PR fixes:

Fixes #99394

Special notes for your reviewer:

The logic was changed with the introduction of HPAScaleToZero, and I imagine the idea was that an HPA should be able to scale to zero if there are no metrics. However, not only does this break the expectation set by the documented API behavior, it also makes it much harder to reliably handle temporary unavailability of e.g. external metrics, where you definitely don't want the HPA to start scaling down when it doesn't know what the missing metric would suggest.
Scaling to zero (or down) should happen when the metrics indicate it, not when the metrics are simply unavailable.
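To make this concrete, here is a hedged sketch of the policy difference (hypothetical function and parameter names, not the controller's actual code): with the guard in place, an unavailable metric blocks a scale-down but never a scale-up; without it, the v1.16 regression lets the HPA shrink the workload on partial data.

```python
def allow_scale(current, desired, any_metric_invalid, guard_scaledown=True):
    """Return the replica count the HPA would apply.

    guard_scaledown=True models the pre-v1.16 (and post-fix) behavior;
    False models the v1.16 regression, where an unavailable metric no
    longer blocked a scale-down.
    """
    if guard_scaledown and any_metric_invalid and desired < current:
        return current  # hold steady instead of acting on partial data
    return desired

# External metric temporarily unavailable, remaining metric suggests 2:
print(allow_scale(10, 2, any_metric_invalid=True))   # 10 (holds steady)
print(allow_scale(10, 2, any_metric_invalid=True,
                  guard_scaledown=False))            # 2 (regression)
print(allow_scale(10, 15, any_metric_invalid=True))  # 15 (scale-up still works)
```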

Does this PR introduce a user-facing change?

Fix a bug that allowed the Horizontal Pod Autoscaler to scale down despite at least one metric being unavailable/invalid.

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. kind/bug Categorizes issue or PR as related to a bug. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. kind/regression Categorizes issue or PR as related to a regression from a prior release. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Feb 26, 2021
@k8s-ci-robot
Contributor

@mikkeloscar: This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Feb 26, 2021
@k8s-ci-robot
Contributor

Hi @mikkeloscar. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added needs-priority Indicates a PR lacks a `priority/foo` label and requires one. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Feb 26, 2021
@k8s-ci-robot k8s-ci-robot added sig/apps Categorizes an issue or PR as relevant to SIG Apps. sig/autoscaling Categorizes an issue or PR as relevant to SIG Autoscaling. and removed do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Feb 26, 2021
@josephburnett
Member

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Mar 2, 2021
@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 2, 2021
@josephburnett
Member

/test pull-kubernetes-dependencies

@josephburnett
Member

I agree completely we should keep this behavior. If the metric is invalid, we should prefer not to do anything.

@josephburnett
Member

These test failures so far seem unrelated to the PR. E.g. pull-kubernetes-verify:

Run ./hack/update-internal-modules.sh
+++ exit code: 1
+++ error: 1
FAILED   verify-internal-modules.sh	17s

Signed-off-by: Mikkel Oscar Lyderik Larsen <mikkel.larsen@zalando.de>
@mikkeloscar mikkeloscar force-pushed the hpa-no-scale-down-some-broken-metrics branch from 8636651 to fef092b Compare March 3, 2021 06:53
@mikkeloscar
Contributor Author

@josephburnett I rebased on master and now all tests are passing!

@josephburnett
Member

Ah, that explains the errors. Thanks.

/lgtm
/approve

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Mar 4, 2021
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: josephburnett, mikkeloscar

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Successfully merging this pull request may close these issues.

hpa: Scales down despite some metrics being unavailable (since v1.16, #74526)