
HPA not scaling down when target is met #46

Closed
developerxnz opened this issue Oct 31, 2018 · 2 comments
Labels
duplicate This issue or pull request already exists

Comments

@developerxnz

Describe the bug
The HPA is not removing replicas once the metric has dropped back to the target value.

To Reproduce
Steps to reproduce
Follow the walkthrough at https://github.com/Azure/azure-k8s-metrics-adapter/blob/master/samples/request-per-second/readme.md

Run kubectl get hpa {hpaname} -w
to watch the state change.

Once hey has finished generating load and the cooldown period has passed, run
kubectl get hpa {hpaname} -w
You will notice the target drops to 0/{targetvalue}, but the number of replicas does not decrease.
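
For context, the HPA in question is driven by the custom pods metric that shows up in the adapter logs below. A minimal sketch of that kind of HPA, assuming a Pods-type metric and an illustrative target value (the deployment name matches the resource-metric HPA in the comment further down; this is not a copy of the sample's manifest):

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: rps-sample
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: rps-sample
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      # metric name as seen in the adapter logs below
      metricName: performanceCounters-requestsPerSecond
      # illustrative target; the sample may use a different value
      targetAverageValue: 10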

Expected behavior
The HPA should scale down to the configured minReplicas.

Kubernetes version (kubectl version):

  • [x] Running on AKS
    Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"windows/amd64"}
    Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:53:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

Logs (kubectl logs <metric adapter pod id>)
I1031 21:02:03.132657 1 aiapiclient.go:52] request to: https://api.applicationinsights.io/v1/apps/{ai-appid}/metrics/customMetrics/queuelengthmonitoredqueue?interval=PT30S&timespan=PT5M
I1031 21:02:03.285583 1 provider.go:65] Received request for custom metric: groupresource: pods, namespace: default, metric name: performanceCounters-requestsPerSecond, selectors: app=rps-sample
I1031 21:02:03.285628 1 aiapiclient.go:52] request to: https://api.applicationinsights.io/v1/apps/{ai-appid}/metrics/performanceCounters/requestsPerSecond?interval=PT30S&timespan=PT5M
I1031 21:03:03.167579 1 provider.go:65] Received request for custom metric: groupresource: pods, namespace: default, metric name: customMetrics-queuelengthmonitoredqueue, selectors: app=queue-sample
I1031 21:03:03.167676 1 aiapiclient.go:52] request to: https://api.applicationinsights.io/v1/apps/{ai-appid}/metrics/customMetrics/queuelengthmonitoredqueue?interval=PT30S&timespan=PT5M
I1031 21:03:03.319472 1 provider.go:65] Received request for custom metric: groupresource: pods, namespace: default, metric name: performanceCounters-requestsPerSecond, selectors: app=rps-sample
I1031 21:03:03.319516 1 aiapiclient.go:52] request to: https://api.applicationinsights.io/v1/apps/{ai-appid}/metrics/performanceCounters/requestsPerSecond?interval=PT30S&timespan=PT5M
I1031 21:03:33.171531 1 provider.go:65] Received request for custom metric: groupresource: pods, namespace: default, metric name: customMetrics-queuelengthmonitoredqueue, selectors: app=queue-sample
I1031 21:03:33.171576 1 aiapiclient.go:52] request to: https://api.applicationinsights.io/v1/apps/{ai-appid}/metrics/customMetrics/queuelengthmonitoredqueue?interval=PT30S&timespan=PT5M
I1031 21:03:33.280357 1 provider.go:65] Received request for custom metric: groupresource: pods, namespace: default, metric name: performanceCounters-requestsPerSecond, selectors: app=rps-sample
I1031 21:03:33.280397 1 aiapiclient.go:52] request to: https://api.applicationinsights.io/v1/apps/{ai-appid}/metrics/performanceCounters/requestsPerSecond?interval=PT30S&timespan=PT5M
I1031 21:04:03.160889 1 provider.go:65] Received request for custom metric: groupresource: pods, namespace: default, metric name: performanceCounters-requestsPerSecond, selectors: app=rps-sample
I1031 21:04:03.160929 1 aiapiclient.go:52] request to: https://api.applicationinsights.io/v1/apps/{ai-appid}/metrics/performanceCounters/requestsPerSecond?interval=PT30S&timespan=PT5M
I1031 21:04:03.264824 1 provider.go:65] Received request for custom metric: groupresource: pods, namespace: default, metric name: customMetrics-queuelengthmonitoredqueue, selectors: app=queue-sample
I1031 21:04:03.264864 1 aiapiclient.go:52] request to: https://api.applicationinsights.io/v1/apps/{ai-appid}/metrics/customMetrics/queuelengthmonitoredqueue?interval=PT30S&timespan=PT5M
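
For reference, the adapter requests above map to the Application Insights REST API, so the same metric can be queried directly to confirm it really has dropped to 0. A sketch, assuming an API key with read access passed via the x-api-key header:

curl -H "x-api-key: {ai-apikey}" \
  "https://api.applicationinsights.io/v1/apps/{ai-appid}/metrics/performanceCounters/requestsPerSecond?interval=PT30S&timespan=PT5M"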

@developerxnz
Author

developerxnz commented Nov 1, 2018

I have tested the following HPA, and it scaled back down as expected once the target was reached.

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: rps-sample
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: rps-sample
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 20
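
For comparison, applying this manifest and watching the scale-down would look roughly like this (the file name is hypothetical):

kubectl apply -f rps-sample-cpu-hpa.yaml
kubectl get hpa rps-sample -w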

@jsturtevant added the duplicate label Nov 1, 2018
@jsturtevant
Collaborator

Thanks for the report. This is a duplicate of #34, and I have been able to get the appropriate behavior with PR #36.

I am going to close this issue and track it in #34.
