We are using KEDA 2.12.1 on Azure Kubernetes Service version 1.27.3. When keda-operator hit an issue, went down, and restarted, it flagged the previously un-paused ScaledObjects as paused and scaled them to 0 pods instead of keeping them running with 1 pod (as set in minReplicaCount).
Expected Behavior
It should keep the pods running after the operator restarts.
Actual Behavior
It scaled the Deployments to 0 pods and killed all running pods of those Deployments.
Hi,
I've tested and I can't reproduce the issue :(
I don't think that this is related to KEDA, because KEDA is stateless. That means that after a restart, KEDA pulls the ScaledObjects from the API server and works based on them.
Based on these lines:
I0206 20:06:35.635622 1 leaderelection.go:285] failed to renew lease keda/operator.keda.sh: timed out waiting for the condition
2024-02-06T20:06:35Z ERROR setup problem running manager {"error": "leader election lost"}
I think that something has happened on the control plane side, and maybe there has been a rollback in etcd, causing the annotation to be placed again (permanently or just for a while). Have you checked if the annotation is still there?
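One quick way to do that check is to print the ScaledObject's annotations and look for the pause annotation. This is a sketch; the `$deployment` and `$NAMESPACE` variable names are illustrative placeholders, and it assumes kubectl access to the cluster:

```shell
# Print all annotations of the ScaledObject; if pausing is still in
# effect, autoscaling.keda.sh/paused-replicas will appear in the output.
kubectl get scaledobject "$deployment" -n "$NAMESPACE" \
  -o jsonpath='{.metadata.annotations}'
```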
@tomkerkhove, do you know if this is something that @tr1et can ask about via a support ticket?
Thanks @JorTurFer, let me try to reproduce the issue on our dev environment.
I don't have the exact annotation info from after the restart right now (we deleted and re-created the ScaledObject after finding the issue, to be safe), but I remember that the "paused" annotations were not there.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
stalebot added the "stale" label (all issues that are marked as stale due to inactivity) on Apr 19, 2024
Report
Steps to Reproduce the Problem
1. Pause the ScaledObject: kubectl annotate scaledobject "$deployment" autoscaling.keda.sh/paused-replicas="0" -n "$NAMESPACE" --overwrite
2. Un-pause it by removing the annotation: kubectl annotate scaledobject "$deployment" autoscaling.keda.sh/paused-replicas- -n "$NAMESPACE" --overwrite
3. Restart keda-operator.
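The steps above can be sketched as a single script. This is a hedged reproduction sketch, not a verified one: the `$NAMESPACE` and `$deployment` values are placeholders, it assumes KEDA is installed in the keda namespace, and the pod label selector in step 3 assumes the default labels from the official KEDA manifests:

```shell
#!/bin/sh
# Placeholders; substitute your own namespace and ScaledObject name.
NAMESPACE="my-namespace"
deployment="my-scaledobject"

# 1. Pause the ScaledObject at 0 replicas.
kubectl annotate scaledobject "$deployment" \
  autoscaling.keda.sh/paused-replicas="0" -n "$NAMESPACE" --overwrite

# 2. Un-pause it by removing the annotation (trailing "-" deletes it).
kubectl annotate scaledobject "$deployment" \
  autoscaling.keda.sh/paused-replicas- -n "$NAMESPACE" --overwrite

# 3. Force a keda-operator restart, e.g. by deleting its pod.
kubectl delete pod -n keda -l app=keda-operator
```

After step 3, watch whether the target Deployment is scaled to 0 instead of staying at minReplicaCount.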
Logs from KEDA operator
Service names are masked.
Operator down logs:
Operator restarted logs:
KEDA Version
2.12.1
Kubernetes Version
1.27
Platform
Microsoft Azure
Scaler Details
Azure Service Bus, CPU, Memory
Anything else?
No response