Fresh install generates "exactly one ScaledObject should match label" #3597

Closed
IliaGe opened this issue Aug 25, 2022 · 8 comments
Labels: bug (Something isn't working), stale (All issues that are marked as stale due to inactivity)

Comments

IliaGe commented Aug 25, 2022

Report

A fresh install of Chart version: 2.8.1 (appVersion: 2.8.0) on EKS 1.22 (v1.22.11-eks-18ef993) enters a loop of errors in the keda-operator-metrics-apiserver

I0825 11:33:51.652365       1 main.go:169] keda_metrics_adapter "msg"="KEDA Version: 2.8.0"
I0825 11:33:51.652417       1 main.go:170] keda_metrics_adapter "msg"="KEDA Commit: a4a118201214e7abdeebad72cbe337b9856f8191"
I0825 11:33:51.652425       1 main.go:171] keda_metrics_adapter "msg"="Go Version: go1.17.13"
I0825 11:33:51.652434       1 main.go:172] keda_metrics_adapter "msg"="Go OS/Arch: linux/amd64"
I0825 11:33:52.703871       1 request.go:601] Waited for 1.012564626s due to client-side throttling, not priority and fairness, request: GET:https://172.20.0.1:443/apis/vpcresources.k8s.aws/v1beta1?timeout=32s
I0825 11:34:02.058382       1 logr.go:261] keda_metrics_adapter/controller-runtime/metrics "msg"="Metrics server is starting to listen" "addr"=":8080"
I0825 11:34:02.059071       1 provider.go:65] keda_metrics_adapter/provider "msg"="starting"
2022/08/25 11:34:02 Starting metrics server at :9022
I0825 11:34:02.059113       1 main.go:236] keda_metrics_adapter "msg"="starting adapter..."
I0825 11:34:02.059396       1 internal.go:362] keda_metrics_adapter "msg"="Starting server" "addr"={"IP":"::","Port":8080,"Zone":""} "kind"="metrics" "path"="/metrics"
I0825 11:34:02.059429       1 controller.go:185] keda_metrics_adapter "msg"="Starting EventSource" "controller"="scaledobject" "controllerGroup"="keda.sh" "controllerKind"="ScaledObject" "source"="kind source: *v1alpha1.ScaledObject"
I0825 11:34:02.059481       1 controller.go:185] keda_metrics_adapter "msg"="Starting EventSource" "controller"="scaledobject" "controllerGroup"="keda.sh" "controllerKind"="ScaledObject" "source"="kind source: *v1alpha1.ScaledObject"
I0825 11:34:02.059499       1 controller.go:193] keda_metrics_adapter "msg"="Starting Controller" "controller"="scaledobject" "controllerGroup"="keda.sh" "controllerKind"="ScaledObject"
I0825 11:34:02.164927       1 controller.go:227] keda_metrics_adapter "msg"="Starting workers" "controller"="scaledobject" "controllerGroup"="keda.sh" "controllerKind"="ScaledObject" "worker count"=1
I0825 11:34:02.609885       1 serving.go:342] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
I0825 11:34:03.224722       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0825 11:34:03.224741       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0825 11:34:03.224745       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0825 11:34:03.224760       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0825 11:34:03.224764       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0825 11:34:03.224752       1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
I0825 11:34:03.225099       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::apiserver.local.config/certificates/apiserver.crt::apiserver.local.config/certificates/apiserver.key"
I0825 11:34:03.225152       1 secure_serving.go:210] Serving securely on [::]:6443
I0825 11:34:03.225192       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0825 11:34:03.324932       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0825 11:34:03.327237       1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
I0825 11:34:03.327377       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
E0825 11:34:10.693688       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"exactly one ScaledObject should match label "}: exactly one ScaledObject should match label
E0825 11:34:16.254990       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"exactly one ScaledObject should match label "}: exactly one ScaledObject should match label
E0825 11:34:16.269297       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"exactly one ScaledObject should match label "}: exactly one ScaledObject should match label
E0825 11:34:16.295631       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"exactly one ScaledObject should match label "}: exactly one ScaledObject should match label 

Expected Behavior

Clean & silent run :)

Actual Behavior

The metrics adapter log is bombarded with error messages.

Steps to Reproduce the Problem

  1. helm install (see the command sketch below)
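
For context, a typical install from the kedacore Helm chart looks roughly like this. This is only a sketch: the release name and namespace ("keda") and the pinned chart version are assumptions, not details taken from the report.

# Hypothetical reproduction sketch: install KEDA chart 2.8.1 from the official
# kedacore repo into its own namespace (release name and namespace assumed).
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
helm install keda kedacore/keda --namespace keda --create-namespace --version 2.8.1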

Logs from KEDA operator

I0825 11:33:52.847087       1 request.go:601] Waited for 1.038756727s due to client-side throttling, not priority and fairness, request: GET:https://172.20.0.1:443/apis/monitoring.coreos.com/v1?timeout=32s
{"level":"info","ts":"2022-08-25T11:34:02Z","logger":"controller-runtime.metrics","msg":"Metrics server is starting to listen","addr":":8080"}
{"level":"info","ts":"2022-08-25T11:34:02Z","logger":"setup","msg":"Running on Kubernetes 1.22+","version":"v1.22.11-eks-18ef993"}
{"level":"info","ts":"2022-08-25T11:34:02Z","logger":"setup","msg":"Starting manager"}
{"level":"info","ts":"2022-08-25T11:34:02Z","logger":"setup","msg":"KEDA Version: 2.8.0"}
{"level":"info","ts":"2022-08-25T11:34:02Z","logger":"setup","msg":"Git Commit: a4a118201214e7abdeebad72cbe337b9856f8191"}
{"level":"info","ts":"2022-08-25T11:34:02Z","logger":"setup","msg":"Go Version: go1.17.13"}
{"level":"info","ts":"2022-08-25T11:34:02Z","logger":"setup","msg":"Go OS/Arch: linux/amd64"}
{"level":"info","ts":"2022-08-25T11:34:02Z","msg":"Starting server","path":"/metrics","kind":"metrics","addr":"[::]:8080"}
{"level":"info","ts":"2022-08-25T11:34:02Z","msg":"Starting server","kind":"health probe","addr":"[::]:8081"}
I0825 11:34:02.204376       1 leaderelection.go:248] attempting to acquire leader lease keda/operator.keda.sh...
I0825 11:34:02.221339       1 leaderelection.go:258] successfully acquired lease keda/operator.keda.sh
{"level":"info","ts":"2022-08-25T11:34:02Z","msg":"Starting EventSource","controller":"clustertriggerauthentication","controllerGroup":"keda.sh","controllerKind":"ClusterTriggerAuthentication","source":"kind source: *v1alpha1.ClusterTriggerAuthentication"}
{"level":"info","ts":"2022-08-25T11:34:02Z","msg":"Starting Controller","controller":"clustertriggerauthentication","controllerGroup":"keda.sh","controllerKind":"ClusterTriggerAuthentication"}
{"level":"info","ts":"2022-08-25T11:34:02Z","msg":"Starting EventSource","controller":"scaledjob","controllerGroup":"keda.sh","controllerKind":"ScaledJob","source":"kind source: *v1alpha1.ScaledJob"}
{"level":"info","ts":"2022-08-25T11:34:02Z","msg":"Starting Controller","controller":"scaledjob","controllerGroup":"keda.sh","controllerKind":"ScaledJob"}
{"level":"info","ts":"2022-08-25T11:34:02Z","msg":"Starting EventSource","controller":"scaledobject","controllerGroup":"keda.sh","controllerKind":"ScaledObject","source":"kind source: *v1alpha1.ScaledObject"}
{"level":"info","ts":"2022-08-25T11:34:02Z","msg":"Starting EventSource","controller":"scaledobject","controllerGroup":"keda.sh","controllerKind":"ScaledObject","source":"kind source: *v2beta2.HorizontalPodAutoscaler"}
{"level":"info","ts":"2022-08-25T11:34:02Z","msg":"Starting Controller","controller":"scaledobject","controllerGroup":"keda.sh","controllerKind":"ScaledObject"}
{"level":"info","ts":"2022-08-25T11:34:02Z","msg":"Starting EventSource","controller":"triggerauthentication","controllerGroup":"keda.sh","controllerKind":"TriggerAuthentication","source":"kind source: *v1alpha1.TriggerAuthentication"}
{"level":"info","ts":"2022-08-25T11:34:02Z","msg":"Starting Controller","controller":"triggerauthentication","controllerGroup":"keda.sh","controllerKind":"TriggerAuthentication"}
{"level":"info","ts":"2022-08-25T11:34:02Z","msg":"Starting workers","controller":"scaledjob","controllerGroup":"keda.sh","controllerKind":"ScaledJob","worker count":1}
{"level":"info","ts":"2022-08-25T11:34:02Z","msg":"Starting workers","controller":"triggerauthentication","controllerGroup":"keda.sh","controllerKind":"TriggerAuthentication","worker count":1}
{"level":"info","ts":"2022-08-25T11:34:02Z","msg":"Starting workers","controller":"clustertriggerauthentication","controllerGroup":"keda.sh","controllerKind":"ClusterTriggerAuthentication","worker count":1}
{"level":"info","ts":"2022-08-25T11:34:02Z","msg":"Starting workers","controller":"scaledobject","controllerGroup":"keda.sh","controllerKind":"ScaledObject","worker count":5}

KEDA Version

2.8.0

Kubernetes Version

1.22

Platform

Amazon Web Services

Scaler Details

No response

Anything else?

No response

IliaGe added the bug label on Aug 25, 2022
JorTurFer (Member) commented

How many ScaledObjects do you have? That behavior is weird
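
For reference, a quick way to answer this, assuming kubectl access to the cluster (this command is not part of the original comment):

# List all ScaledObjects in every namespace.
kubectl get scaledobjects --all-namespaces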

chrism417 commented

I'm having this same issue with ZERO ScaledObjects on version 2.8.1.
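
For reference, even with zero ScaledObjects the external metrics API is registered and served by KEDA once it is installed. A sketch of how to confirm which service backs that API (not part of the original comment):

# Show the APIService registration for external metrics; with KEDA installed,
# it should point at the keda-operator-metrics-apiserver service.
kubectl get apiservice v1beta1.external.metrics.k8s.io -o yaml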

JorTurFer (Member) commented

This could be because something in the cluster is querying the external.metrics.k8s.io endpoint for some reason (e.g., AFAIK GKE Backup uses this API for something).

You can try requesting a metric that doesn't exist with kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/fake-metric"; if my suspicion is right, you will see the same error in the log on every request you make.
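
A sketch of that check, assuming the default "keda" namespace and the chart's default deployment name for the metrics apiserver (the metric name fake-metric is just the placeholder from the comment above):

# Request a non-existent external metric; this should trigger the error once.
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/fake-metric"

# Then check the metrics apiserver log for a matching
# "exactly one ScaledObject should match label" line (deployment name assumed).
kubectl logs -n keda deploy/keda-operator-metrics-apiserver --tail=20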

JorTurFer (Member) commented

Could you try and share how it goes?

chrism417 commented

That's correct, the log message shows up.

JorTurFer (Member) commented

@IliaGe, by chance, have you been able to test it?


stale bot commented Dec 10, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

stale bot added the stale label on Dec 10, 2022

stale bot commented Dec 17, 2022

This issue has been automatically closed due to inactivity.

stale bot closed this as completed on Dec 17, 2022