Keda operator restarting after failing to renew leader election #4212
Comments
Hello, all we can do from the KEDA side is to add the option to configure the leader election values; maybe you could try extending them to reduce the chance of renewal failures.
@JorTurFer What would be the standard recommended values for the config below if the number of ScaledObjects is close to 100?
Hey,
Okay, I have just noticed that there aren't default values 🤦
K8s cluster is GKE
Have you tried modifying the default values? Does it work now?
@JorTurFer I have yet to apply these changes; I will do so in the next iteration and confirm whether this works.
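For context: the KEDA operator is built on controller-runtime, where these leader-election timings are ordinary manager options. Below is a minimal sketch of what extending them looks like in a controller-runtime based operator; the lengthened values are illustrative guesses for a busy cluster, not official KEDA recommendations.

```go
// Minimal sketch: configuring leader-election timings in a
// controller-runtime based operator (KEDA's operator uses this library).
// The lengthened values are illustrative, not official recommendations.
package main

import (
	"log"
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/manager"
)

func main() {
	// controller-runtime defaults: LeaseDuration=15s, RenewDeadline=10s,
	// RetryPeriod=2s. Extending them makes a slow API-server response
	// less likely to be treated as a lost lease.
	leaseDuration := 60 * time.Second
	renewDeadline := 50 * time.Second
	retryPeriod := 10 * time.Second

	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), manager.Options{
		LeaderElection:   true,
		LeaderElectionID: "operator.keda.sh", // the lease name seen in the logs below
		LeaseDuration:    &leaseDuration,
		RenewDeadline:    &renewDeadline,
		RetryPeriod:      &retryPeriod,
	})
	if err != nil {
		log.Fatalf("unable to create manager: %v", err)
	}

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		log.Fatalf("manager exited: %v", err)
	}
}
```

Whatever values you pick, RenewDeadline must be shorter than LeaseDuration, and RetryPeriod should be well under RenewDeadline.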
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed due to inactivity.
Report
I0207 13:31:36.792715 1 leaderelection.go:283] failed to renew lease operator/operator.keda.sh: timed out waiting for the condition
{"level":"info","ts":"2023-02-07T13:31:36Z","msg":"Stopping and waiting for non leader election runnables"}
{"level":"info","ts":"2023-02-07T13:31:36Z","msg":"Stopping and waiting for leader election runnables"}
{"level":"info","ts":"2023-02-07T13:31:36Z","msg":"Stopping and waiting for caches"}
{"level":"info","ts":"2023-02-07T13:31:36Z","msg":"Stopping and waiting for webhooks"}
{"level":"info","ts":"2023-02-07T13:31:36Z","msg":"Wait completed, proceeding to shutdown the manager"}
{"level":"info","ts":"2023-02-07T13:31:36Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"triggerauthentication","controllerGroup":"keda.sh","controllerKind":"TriggerAuthentication"}
{"level":"info","ts":"2023-02-07T13:31:36Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"scaledjob","controllerGroup":"keda.sh","controllerKind":"ScaledJob"}
{"level":"info","ts":"2023-02-07T13:31:36Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"clustertriggerauthentication","controllerGroup":"keda.sh","controllerKind":"ClusterTriggerAuthentication"}
{"level":"info","ts":"2023-02-07T13:31:36Z","msg":"All workers finished","controller":"clustertriggerauthentication","controllerGroup":"keda.sh","controllerKind":"ClusterTriggerAuthentication"}
Expected Behavior
The KEDA operator container should not restart.
Actual Behavior
The KEDA operator container gets restarted every 15-20 hours.
Steps to Reproduce the Problem
See the logs from the KEDA operator in the Report section above.
KEDA Version
2.9.0
Kubernetes Version
1.25
Platform
None
Scaler Details
Kafka Scaler
Anything else?
No response