Which component are you using?:
cluster-autoscaler

What version of the component are you using?:
Component version: cluster-autoscaler-release-1.30

What k8s version are you using (kubectl version)?:
v1.26

What behaviour did you expect to see?:
The nodes that timed out during the scale-up process should be specifically targeted for removal, rather than being removed at random.

What happened instead?:
The nodes that timed out during the scale-up process are removed at random.

How to reproduce it (as minimally and precisely as possible):
1. Initiate a scale-up request.
2. The cloud instances are created successfully.
3. A new node registers successfully but remains in the NotReady state (for example, due to a CNI or container runtime failure).
4. After MaxNodeProvisionTime elapses, the cloud instances are removed during the fixNodeGroupSize reconciliation. However, DecreaseTargetSize deletes cloud instances at random instead of removing the newly created, timed-out nodes.

Anything else we need to know?:
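To make the reproduction step about fixNodeGroupSize concrete, here is a minimal toy model in Go of the distinction at play. This is not cluster-autoscaler's actual implementation; the `nodeGroup` type, its fields, and the "drop the oldest instances" policy are illustrative assumptions. The method names mirror the shape of the real cloudprovider.NodeGroup interface: DecreaseTargetSize only shrinks the desired size, leaving the choice of which instances to terminate to the provider, whereas DeleteNodes removes exactly the named instances.

```go
package main

import "fmt"

// nodeGroup is a toy stand-in (hypothetical, for illustration only) for a
// cloud provider node group with a desired size and a set of instances.
type nodeGroup struct {
	instances []string // instance IDs, oldest first
	target    int      // desired size
}

// DecreaseTargetSize mirrors the interface method's shape: it takes a delta
// but no instance list, so which instances disappear is up to the provider.
// Here the simulated provider drops the oldest instances, which may well be
// healthy nodes rather than the stuck, newly created ones.
func (ng *nodeGroup) DecreaseTargetSize(delta int) {
	ng.target += delta
	ng.instances = ng.instances[-delta:] // provider's choice, not the caller's
}

// DeleteNodes, by contrast, removes exactly the named instances, which is
// what targeting the timed-out nodes would require.
func (ng *nodeGroup) DeleteNodes(ids []string) {
	drop := map[string]bool{}
	for _, id := range ids {
		drop[id] = true
	}
	kept := ng.instances[:0]
	for _, id := range ng.instances {
		if !drop[id] {
			kept = append(kept, id)
		}
	}
	ng.instances = kept
	ng.target = len(ng.instances)
}

func main() {
	ng := &nodeGroup{
		instances: []string{"healthy-1", "healthy-2", "stuck-new-3"},
		target:    3,
	}
	// Shrinking the target by one removes a healthy node; the stuck one survives.
	ng.DecreaseTargetSize(-1)
	fmt.Println(ng.instances) // [healthy-2 stuck-new-3]
}
```

Under these assumptions, the sketch shows why shrinking the target size during fixNodeGroupSize cannot guarantee that the timed-out instances are the ones removed: only an instance-targeted call like DeleteNodes carries that information to the provider.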