Ability to scale down individual node(s) for RKE2-provisioned clusters #4446
Comments
Per @vincent99, this should be relatively easy. It should be a matter of selecting each of the nodes and setting the annotation on them. This would then allow scaling to happen as expected.
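For illustration, a minimal sketch of what setting that annotation on a machine could look like against the Kubernetes API. The endpoint path, the annotation value (`'true'`), and the `markMachineForDeletion` helper are assumptions for this sketch, not confirmed dashboard code:

```ts
// Hypothetical sketch: mark one CAPI machine for deletion so the controller
// removes that specific machine (rather than an arbitrary one) on the next
// scale-down. The endpoint path and annotation value are assumptions.
const DELETE_MACHINE_ANNOTATION = 'cluster.k8s.io/delete-machine';

async function markMachineForDeletion(
  apiBase: string,
  namespace: string,
  name: string
): Promise<void> {
  const res = await fetch(
    `${apiBase}/apis/cluster.x-k8s.io/v1beta1/namespaces/${namespace}/machines/${name}`,
    {
      method: 'PATCH',
      headers: { 'Content-Type': 'application/merge-patch+json' },
      body: JSON.stringify({
        metadata: { annotations: { [DELETE_MACHINE_ANNOTATION]: 'true' } },
      }),
    }
  );
  if (!res.ok) {
    throw new Error(`Failed to annotate machine ${namespace}/${name}: ${res.status}`);
  }
}
```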
@snasovich we'll push this to 2.6.4 for now, but if you do need this, Vince does have some capacity.
@snasovich To confirm: which steve/norman resource should the annotation be set on? I've tried setting the annotation to ...
@thedadams, could you please help answer Richard's question above?
@richard-cox Sorry, there was a typo in Sergey's original message. The annotation does go on the ...
Further Testing
Previously Failed Test Cases:
Further testing: after doing this and then attempting to scale the node back up, the machine no longer has a node reference when attempting to bring up another cluster. To repeat this issue:
@richard-cox do you think the last comment is an issue?
This should be fine. From my understanding, a deleted machine should come back, so deletion may be helpful if that instance is misbehaving, whereas a scaled-down machine will never come back; the removal is permanent.
Setup For Testing
Failed Test Cases:
Debugging
The option to scale down etcd is available even when there is only a single etcd node. The moment you scale that node down, you will get the following error:
It appears the node still exists in the node driver's machine provider, so it wasn't fully deleted.
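As an aside, a guard along these lines could prevent the broken state described above by refusing to scale down the last etcd or control plane node. A rough sketch only; the `MachineInfo` shape and role flags are hypothetical, not the dashboard's actual model:

```ts
// Hypothetical guard: refuse to scale down a machine if it is the last one
// holding the etcd (or control plane) role. Field names are illustrative.
interface MachineInfo {
  name: string;
  isEtcd: boolean;
  isControlPlane: boolean;
}

function canScaleDown(target: MachineInfo, allMachines: MachineInfo[]): boolean {
  const etcdCount = allMachines.filter((m) => m.isEtcd).length;
  const cpCount = allMachines.filter((m) => m.isControlPlane).length;
  if (target.isEtcd && etcdCount <= 1) return false;       // last etcd node
  if (target.isControlPlane && cpCount <= 1) return false; // last control plane node
  return true;
}
```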
Moving to Done, seeing as @richard-cox says the expected behavior was seen in my testing and the previously failed test case now passes.
Confirmed with @catherineluse and @gaktive to add ...
Detailed Description
For RKE1-provisioned clusters, there is currently an option to scale down specific node(s).
As a dropdown action for a single node:
As an action button for multiple selected nodes:
The same functionality should be available for RKE2-provisioned clusters.
Context
This is needed for RKE2 provisioning parity with RKE1.
Additional Details
It should be possible to achieve this by setting the `cluster.k8s.io/delete-machine` annotation on the node(s) to be scaled down before calling the back-end to update the node pool(s) with the appropriate node count for each affected pool.

Also, it looks like the RKE1 case may allow invalid deletion requests (like scaling down the only control plane node). It would be nice to avoid such issues in the RKE2 implementation. For example, I managed to break my RKE1 cluster by attempting to scale down the only CP node and then scaling it back up (interestingly, it was stuck at "Waiting for node to be removed from cluster" and remained operational until I attempted to scale the node pool back up).
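Putting the two steps together, a hedged sketch of the suggested flow: annotate the selected machines first, then decrement each affected pool's count. The pool/machine shapes and the injected `annotate`/`updateCluster` callbacks are assumptions for illustration, not the actual dashboard API:

```ts
// Hypothetical end-to-end flow: annotate each selected machine, then reduce
// each affected pool's count by the number of machines removed from it.
// All shapes and injected callbacks are illustrative assumptions.
interface MachinePool { name: string; quantity: number; }
interface SelectedMachine { namespace: string; name: string; poolName: string; }

async function scaleDownMachines(
  pools: MachinePool[],
  selected: SelectedMachine[],
  annotate: (m: SelectedMachine) => Promise<void>, // e.g. the earlier sketch
  updateCluster: (pools: MachinePool[]) => Promise<void>
): Promise<void> {
  // Step 1: mark each machine so the controller deletes these specific ones.
  for (const m of selected) {
    await annotate(m);
  }
  // Step 2: count removals per pool and decrement each pool's quantity.
  const removed = new Map<string, number>();
  for (const m of selected) {
    removed.set(m.poolName, (removed.get(m.poolName) ?? 0) + 1);
  }
  const updatedPools = pools.map((p) => ({
    ...p,
    quantity: Math.max(0, p.quantity - (removed.get(p.name) ?? 0)),
  }));
  await updateCluster(updatedPools);
}
```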