Which component are you using?:
cluster-autoscaler
Which autoscaling component hosted in this repository (cluster-autoscaler, vertical-pod-autoscaler, addon-resizer, helm charts) is the bug in?
cluster-autoscaler
What version of the component are you using?:
appVersion: 1.21.0
chart version 9.10.7
What k8s version are you using (kubectl version)?:
1.21
What environment is this in?:
GKE
What did you expect to happen?:
The lock should be held by the OSS cluster-autoscaler, not by the GKE cluster autoscaler.
What happened instead?:
lock is held by gke-******
How to reproduce it (as minimally and precisely as possible):
Restart the cluster-autoscaler pods.
Right, when GKE autoscaling is disabled, it should be possible to run the OSS Cluster Autoscaler instead. One workaround is to use the --namespace flag and run CA in any namespace other than kube-system, so its leader election resource lock does not conflict with the GKE CA's.
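For reference, a minimal sketch of that workaround, assuming a dedicated cluster-autoscaler namespace; the namespace name, ServiceAccount name, and image tag below are illustrative, not the chart's exact output:

```yaml
# Illustrative excerpt of the cluster-autoscaler Deployment, run in a
# dedicated namespace so its leader election configmap does not collide
# with the GKE-managed autoscaler's lock in kube-system.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: cluster-autoscaler   # illustrative; anything other than kube-system
spec:
  template:
    spec:
      serviceAccountName: cluster-autoscaler
      containers:
        - name: cluster-autoscaler
          image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.21.0
          command:
            - ./cluster-autoscaler
            - --namespace=cluster-autoscaler   # leader election lock lives here
```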
Thank you, that seems to solve the issue.
I also had to update the Role and RoleBinding to the same namespace specified in --namespace so the ServiceAccount has permission to get and update the configmap.
I am running into the same issue. Do you have the steps for updating the Role and RoleBinding?
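For what it's worth, a minimal sketch of the RBAC change described above, assuming the ServiceAccount and namespace are both named cluster-autoscaler (illustrative names); the create verb is included on the assumption that the leader election configmap has to be created on first run:

```yaml
# Role and RoleBinding in the namespace passed via --namespace, so the
# ServiceAccount can create/get/update the leader election configmap there.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cluster-autoscaler
  namespace: cluster-autoscaler
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create", "get", "update", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cluster-autoscaler
  namespace: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cluster-autoscaler
subjects:
  - kind: ServiceAccount
    name: cluster-autoscaler
    namespace: cluster-autoscaler
```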