This repository has been archived by the owner on Jan 22, 2021. It is now read-only.
HPA not scaling down when target is met #46
Labels: duplicate (this issue or pull request already exists)
Describe the bug
The HPA isn't removing replicas once the target value has been met.
To Reproduce
Steps to reproduce:
1. Follow the walkthrough at https://github.com/Azure/azure-k8s-metrics-adapter/blob/master/samples/request-per-second/readme.md
2. Run `kubectl get hpa {hpaname} -w` to watch the state change.
3. Once `hey` has finished generating load and the cooldown period has passed, keep watching the output of `kubectl get hpa {hpaname} -w`.
4. Notice that the target drops to `0/{targetvalue}`, but the number of replicas does not reduce.
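While reproducing, the HPA's own view of the scale-down decision can be inspected directly. These commands are illustrative (`rps-sample` stands in for the actual HPA name from the walkthrough, and the controller-manager log command only works where the control plane is accessible, e.g. not on a managed AKS cluster):

```
# Watch the HPA state change (placeholder name).
kubectl get hpa rps-sample -w

# Inspect the HPA's conditions and recent scaling events,
# e.g. an AbleToScale condition or a backoff message.
kubectl describe hpa rps-sample

# Where the control plane is reachable, the controller-manager logs
# record why a scale-down was or was not executed.
kubectl -n kube-system logs -l component=kube-controller-manager --tail=50
```

If `kubectl describe hpa` shows the current metric at 0 and no scale-down event, the problem is likely in the controller's decision rather than in the metric the adapter serves.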
Expected behavior
The HPA should scale down to the minimum replica count (`minReplicas`).
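For context, a minimal sketch of the kind of HPA the request-per-second walkthrough sets up; the names, `minReplicas`/`maxReplicas` values, and target value here are illustrative, not the exact sample manifest (the `autoscaling/v2beta1` Pods-metric form matches the server version above, 1.11):

```yaml
# Hypothetical HPA matching the request-per-second sample;
# names and numbers are illustrative.
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: rps-sample
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: rps-sample
  minReplicas: 1          # expected floor once load stops
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: performanceCounters-requestsPerSecond
      targetAverageValue: "10"
```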
Kubernetes version (`kubectl version`):
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:53:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Logs (`kubectl logs <metric adapter pod id>`):
I1031 21:02:03.132657 1 aiapiclient.go:52] request to: https://api.applicationinsights.io/v1/apps/{ai-appid}/metrics/customMetrics/queuelengthmonitoredqueue?interval=PT30S&timespan=PT5M
I1031 21:02:03.285583 1 provider.go:65] Received request for custom metric: groupresource: pods, namespace: default, metric name: performanceCounters-requestsPerSecond, selectors: app=rps-sample
I1031 21:02:03.285628 1 aiapiclient.go:52] request to: https://api.applicationinsights.io/v1/apps/{ai-appid}/metrics/performanceCounters/requestsPerSecond?interval=PT30S&timespan=PT5M
I1031 21:03:03.167579 1 provider.go:65] Received request for custom metric: groupresource: pods, namespace: default, metric name: customMetrics-queuelengthmonitoredqueue, selectors: app=queue-sample
I1031 21:03:03.167676 1 aiapiclient.go:52] request to: https://api.applicationinsights.io/v1/apps/{ai-appid}/metrics/customMetrics/queuelengthmonitoredqueue?interval=PT30S&timespan=PT5M
I1031 21:03:03.319472 1 provider.go:65] Received request for custom metric: groupresource: pods, namespace: default, metric name: performanceCounters-requestsPerSecond, selectors: app=rps-sample
I1031 21:03:03.319516 1 aiapiclient.go:52] request to: https://api.applicationinsights.io/v1/apps/{ai-appid}/metrics/performanceCounters/requestsPerSecond?interval=PT30S&timespan=PT5M
I1031 21:03:33.171531 1 provider.go:65] Received request for custom metric: groupresource: pods, namespace: default, metric name: customMetrics-queuelengthmonitoredqueue, selectors: app=queue-sample
I1031 21:03:33.171576 1 aiapiclient.go:52] request to: https://api.applicationinsights.io/v1/apps/{ai-appid}/metrics/customMetrics/queuelengthmonitoredqueue?interval=PT30S&timespan=PT5M
I1031 21:03:33.280357 1 provider.go:65] Received request for custom metric: groupresource: pods, namespace: default, metric name: performanceCounters-requestsPerSecond, selectors: app=rps-sample
I1031 21:03:33.280397 1 aiapiclient.go:52] request to: https://api.applicationinsights.io/v1/apps/{ai-appid}/metrics/performanceCounters/requestsPerSecond?interval=PT30S&timespan=PT5M
I1031 21:04:03.160889 1 provider.go:65] Received request for custom metric: groupresource: pods, namespace: default, metric name: performanceCounters-requestsPerSecond, selectors: app=rps-sample
I1031 21:04:03.160929 1 aiapiclient.go:52] request to: https://api.applicationinsights.io/v1/apps/{ai-appid}/metrics/performanceCounters/requestsPerSecond?interval=PT30S&timespan=PT5M
I1031 21:04:03.264824 1 provider.go:65] Received request for custom metric: groupresource: pods, namespace: default, metric name: customMetrics-queuelengthmonitoredqueue, selectors: app=queue-sample
I1031 21:04:03.264864 1 aiapiclient.go:52] request to: https://api.applicationinsights.io/v1/apps/{ai-appid}/metrics/customMetrics/queuelengthmonitoredqueue?interval=PT30S&timespan=PT5M
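The value the adapter reports can be cross-checked against the Application Insights REST API directly; the URL below mirrors the one in the log output, and `{ai-appid}`/`{api-key}` remain placeholders for the app id and an API key with read access:

```
# Query the same metric the adapter requests from Application Insights.
curl -H "x-api-key: {api-key}" \
  "https://api.applicationinsights.io/v1/apps/{ai-appid}/metrics/performanceCounters/requestsPerSecond?interval=PT30S&timespan=PT5M"
```

If this returns 0 while the HPA still holds extra replicas, the adapter is serving the expected value and the issue sits with the HPA controller's scale-down behavior, not the metrics pipeline.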