
Human-Readable Print Of HPA with Object-Metric TargetAverageValue is Wrong #89315

Closed
zach-robinson opened this issue Mar 20, 2020 · 8 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. sig/autoscaling Categorizes an issue or PR as relevant to SIG Autoscaling.

Comments

@zach-robinson

What happened:
Created an HPA against an Object metric with a target averageValue. The HPA works as expected, but when printed in human-readable format the current and target averageValue are not shown:

[root@master0 demo]# kubectl get hpa am-server01-applicationmgmt-consumer-2
NAME                                     REFERENCE                                         TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
am-server01-applicationmgmt-consumer-2   Deployment/am-server01-applicationmgmt-consumer   0/0       1         2         2          17s

For some reason the HPA controller seems to be inserting dummy values for the (non-averaged) currentValue and targetValue, as seen in the annotations:

    autoscaling.alpha.kubernetes.io/conditions: '[{"type":"AbleToScale","status":"True","lastTransitionTime":"2020-03-20T20:16:13Z","reason":"ReadyForNewScale","message":"recommended
      size matches current size"},{"type":"ScalingActive","status":"True","lastTransitionTime":"2020-03-20T20:16:13Z","reason":"ValidMetricFound","message":"the
      HPA was able to successfully calculate a replica count from external metric
      aggregate.percentile-99.pt2m.lag(\u0026LabelSelector{MatchLabels:map[string]string{consumerGroup:
      gomgmt-resource-consumer,topic: com.ibm.perfmgmt.applicationmgmt.fg.work,},MatchExpressions:[],})"},{"type":"ScalingLimited","status":"False","lastTransitionTime":"2020-03-20T20:16:13Z","reason":"DesiredWithinRange","message":"the
      desired count is within the acceptable range"}]'
    autoscaling.alpha.kubernetes.io/current-metrics: '[{"type":"Object","object":{"target":{"kind":"","name":""},"metricName":"aggregate.percentile-99.pt2m.lag","currentValue":"0","selector":{"matchLabels":{"consumerGroup":"gomgmt-resource-consumer","topic":"com.ibm.perfmgmt.applicationmgmt.fg.work"}},"averageValue":"658455m"}}]'
    autoscaling.alpha.kubernetes.io/metrics: '[{"type":"Object","object":{"target":{"kind":"Deployment","name":"am-server01-applicationmgmt-consumer","apiVersion":"apps/v1"},"metricName":"aggregate.percentile-99.pt2m.lag","targetValue":"0","selector":{"matchLabels":{"consumerGroup":"gomgmt-resource-consumer","topic":"com.ibm.perfmgmt.applicationmgmt.fg.work"}},"averageValue":"1k"}}]'
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"autoscaling/v2beta2","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"am-server01-applicationmgmt-consumer-2","namespace":"kube-system"},"spec":{"maxReplicas":2,"metrics":[{"object":{"describedObject":{"apiVersion":"apps/v1","kind":"Deployment","name":"am-server01-applicationmgmt-consumer"},"metric":{"name":"aggregate.percentile-99.pt2m.lag","selector":{"matchLabels":{"consumerGroup":"gomgmt-resource-consumer","topic":"com.ibm.perfmgmt.applicationmgmt.fg.work"}}},"target":{"averageValue":"1k","type":"AverageValue"}},"type":"Object"}],"minReplicas":1,"scaleTargetRef":{"apiVersion":"apps/v1","kind":"Deployment","name":"am-server01-applicationmgmt-consumer"}}}

The controller is correctly using the averageValue to calculate the number of replicas, as you can see in the kubectl describe output:

Metrics:                                                                                                         ( current / target )
  "aggregate.percentile-99.pt2m.lag" on Deployment/am-server01-applicationmgmt-consumer (target average value):  207 / 1k

But it seems to be causing issues in the human-readable printout.

What you expected to happen:
When running kubectl get hpa, the current average metric value and the target averageValue for the Object metric should be printed under the TARGETS header.
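
For illustration only (the exact formatting is up to the printer, so this is a guess, not actual kubectl output), using the averaged values from the describe output above the TARGETS column would be expected to show something like:

NAME                                     REFERENCE                                         TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
am-server01-applicationmgmt-consumer-2   Deployment/am-server01-applicationmgmt-consumer   207/1k    1         2         2          17s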

How to reproduce it (as minimally and precisely as possible):
Create an HPA using an Object metric with an averageValue target, for example the manifest below.
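
For reference, this is the manifest from the last-applied-configuration annotation above, rendered as YAML. The metric name and selector assume a custom metrics adapter that serves this particular Object metric; substitute any Object metric available in your cluster:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: am-server01-applicationmgmt-consumer-2
  namespace: kube-system
spec:
  minReplicas: 1
  maxReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: am-server01-applicationmgmt-consumer
  metrics:
  - type: Object
    object:
      describedObject:
        apiVersion: apps/v1
        kind: Deployment
        name: am-server01-applicationmgmt-consumer
      metric:
        name: aggregate.percentile-99.pt2m.lag
        selector:
          matchLabels:
            consumerGroup: gomgmt-resource-consumer
            topic: com.ibm.perfmgmt.applicationmgmt.fg.work
      target:
        type: AverageValue
        averageValue: 1k

After applying it, kubectl get hpa shows 0/0 under TARGETS on this cluster, while kubectl describe hpa shows the correct current/target average values.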

Anything else we need to know?:
#72824
#87733

Environment:

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0+724e12f93f", GitCommit:"d5465d715197862b15c8cf05e864d7c3cfea6917", GitTreeState:"clean", BuildDate:"2019-10-10T22:03:41Z", GoVersion:"go1.12.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.6+2e5ed54", GitCommit:"2e5ed54", GitTreeState:"clean", BuildDate:"2019-10-10T22:04:13Z", GoVersion:"go1.12.8", Compiler:"gc", Platform:"linux/amd64"}
  • OS (e.g: cat /etc/os-release): Red Hat Enterprise Linux CoreOS 42.80.20191010.0 (Ootpa)
  • Kernel (e.g. uname -a): 3.10.0-1062.4.1.el7.x86_64
@zach-robinson zach-robinson added the kind/bug Categorizes issue or PR as related to a bug. label Mar 20, 2020
@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Mar 20, 2020
@zach-robinson
Author

/sig autoscaling

@k8s-ci-robot k8s-ci-robot added sig/autoscaling Categorizes an issue or PR as relevant to SIG Autoscaling. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Mar 20, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 18, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 18, 2020
@zach-robinson
Author

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jul 20, 2020
@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 18, 2020
@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 17, 2020
@arjunrn
Contributor

arjunrn commented Nov 17, 2020

/remove-lifecycle rotten
/assign @arjunrn

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Nov 17, 2020
@arjunrn
Contributor

arjunrn commented Nov 20, 2020

@zach-robinson It looks like this issue was fixed with this PR
