Description
Problem
We attempted to scale the cluster from 8 to 9 worker nodes using the Scale Cluster option in the CloudStack UI.
The new worker VM was created successfully and is in the Running state in CloudStack; however, Kubernetes does not register the new node.
After the scale operation:
CloudStack shows 9 worker nodes
kubectl get nodes continues to show only 8 worker nodes (see the comparison below)
The cluster status in the UI stays stuck on "Scaling"
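For clarity, this is roughly how we compared the two views. A minimal sketch, assuming CloudMonkey (cmk) is configured against this zone; the cluster name is a placeholder:

```bash
# Kubernetes' view: still 1 control plane + 8 workers.
kubectl get nodes -o wide

# CloudStack's view of the same CKS cluster: reports size 9.
# "my-cks-cluster" is a placeholder name.
cmk list kubernetesclusters name=my-cks-cluster filter=name,size,state
```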
Error observed in CloudStack Management Server logs:
ERROR ... Unexpected exception while executing ScaleKubernetesClusterCmd
at KubernetesClusterResourceModifierActionWorker.removeSshFirewallRule
at KubernetesClusterScaleWorker.scaleKubernetesClusterIsolatedNetworkRules
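For context, the surrounding stack trace was pulled roughly like this (the log path below is the default management-server location and may differ in other setups):

```bash
# Run on the management server host; show the lines around the failing command.
grep -B 2 -A 40 "ScaleKubernetesClusterCmd" \
  /var/log/cloudstack/management/management-server.log
```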
Versions
Environment details:
CloudStack version: 4.19.0.1
Cluster type: CloudStack Kubernetes Service (CKS)
Initial cluster size:
1 Control Plane
8 Worker Nodes (working fine)
Scale target: 9 Worker Nodes
Global setting cloud.kubernetes.cluster.max.size was increased from 10 to 50 prior to scaling.
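To rule out the limit itself, we verified the new value with CloudMonkey. A minimal sketch; parameter and filter names are assumed from the listConfigurations API:

```bash
# Confirm the raised limit (50) is what the management server actually sees.
cmk list configurations name=cloud.kubernetes.cluster.max.size filter=name,value
```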
The steps to reproduce the bug
Deploy a Kubernetes cluster using CloudStack Kubernetes Service (CKS) on CloudStack 4.19.0.1 with the following configuration:
1 Control Plane node
8 Worker nodes
From the CloudStack UI, navigate to:
Kubernetes → Clusters → [cluster name] → Scale Cluster
Scale the cluster by increasing the worker node count from 8 to 9 and submit the scale operation.
Observe the following behavior:
The new worker VM is created successfully and shows Running state in CloudStack.
The scale task stays stuck in the Scaling state.
kubectl get nodes still shows only 8 worker nodes (see the sketch below).
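The same behavior can be reproduced without the UI. A minimal sketch using the scaleKubernetesCluster API via CloudMonkey; the cluster UUID is a placeholder:

```bash
# Trigger the scale via the API instead of the UI; <cluster-uuid> is a placeholder.
cmk scale kubernetescluster id=<cluster-uuid> size=9

# Watch whether the 9th worker ever registers with the Kubernetes API server.
kubectl get nodes --watch
```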
What to do about it?
Is there any workaround to resolve this without restarting the cluster?
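In case it helps triage: one check we can run on the new worker VM (assuming CKS workers are bootstrapped by cloud-init running kubeadm join, which is an assumption on our side) is whether the join ever ran:

```bash
# Run on the new worker VM, reachable with the cluster's SSH keypair.
# Shows whether the cloud-init driven bootstrap (kubeadm join) ran or failed.
sudo tail -n 100 /var/log/cloud-init-output.log

# If kubelet was installed and started, its logs show join/registration attempts.
sudo journalctl -u kubelet --no-pager | tail -n 50
```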