
Q: Scale Master Nodes? #1543

Closed
Berndinox opened this issue Sep 27, 2021 · 2 comments
Labels
sig/cluster-management, triage/support

Comments

@Berndinox

Hi!

I've spun up a fresh cluster on Hetzner. It works great so far.

I found some guides on how to scale and size the worker nodes, but none for the master nodes.
Because I chose relatively big master nodes when creating the cluster, I would like to scale them down a bit.

How can I scale the VM size of all three master nodes up or down?

Vertically scale the master nodes

@xmudrii
Member

xmudrii commented Sep 28, 2021

Scaling the control plane nodes is a bit more complicated. Worker nodes can easily be replaced and/or powered off, but that's not the case for the control plane nodes.

In this case, you can't rely on Terraform to scale the control plane nodes for you, because Terraform would power off all control plane nodes at the same time, which could make etcd lose quorum (with three members, etcd needs two healthy members to keep quorum; if quorum is lost, your cluster could end up corrupt/lost).

I haven't tested it, but I think you might have success if you do something like this (see the shell sketch after the list):

  • Drain one of the control plane nodes (kubectl drain <node-name>)
  • SSH to that node and power it off (sudo poweroff)
  • Log in to the Hetzner Control Panel and scale the instance up/down
  • Power the instance back on after scaling it
  • Wait for the node to come up
    • Make sure kubectl get nodes reports Ready for that node
    • Just to be safe, check the etcd logs for that node (run kubectl get pods -n kube-system and find the etcd pod for the node you restarted; each etcd pod has the node's hostname in its name)
  • Repeat the steps for the remaining two control plane nodes (make sure you do it for only one node at a time)
  • After completing all steps for all control plane nodes, update the instance size in the terraform.tfvars file
  • Run terraform refresh to update your Terraform state with the changes you made manually
  • Run terraform plan and make sure it doesn't propose any additional changes
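
Equally untested, but one iteration of that loop might look roughly like this as a shell session. <node-name>, <node-address>, and <server-name> are placeholders, the hcloud CLI resize (hcloud server change-type) is an assumption standing in for the Control Panel step, and cx21 is just an example target type:

```sh
# Repeat once per control plane node, strictly one node at a time.

# 1. Drain the node (this also cordons it, i.e. marks it unschedulable)
kubectl drain <node-name> --ignore-daemonsets

# 2. Power the node off from inside the VM
ssh root@<node-address> 'poweroff'

# 3. Resize the instance (or do this step in the Hetzner Control Panel);
#    --keep-disk leaves the disk size unchanged so scaling back down stays possible
hcloud server change-type <server-name> cx21 --keep-disk

# 4. Power the instance back on
hcloud server poweron <server-name>

# 5. Wait until the node reports Ready, then make it schedulable again
kubectl get nodes
kubectl uncordon <node-name>

# 6. Check the etcd pod for that node (its name includes the node's hostname)
kubectl get pods -n kube-system | grep etcd
kubectl logs -n kube-system etcd-<node-name>
```

The uncordon step isn't in the list above, but draining cordons the node, so it has to be marked schedulable again afterwards.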

Note that some providers might not allow you to scale down below the size you used when creating the instance. I think Hetzner might allow scaling down, but if that's not the case, you'll have to use a different approach.

@Berndinox
Author

Berndinox commented Sep 29, 2021

I can confirm the procedure does work!

The master size is defined inside variables.tf.
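
Not the actual file contents, just a sketch of what that variable might look like; the name control_plane_type and its default are assumptions:

```hcl
# Hypothetical variable; check variables.tf for the real name.
variable "control_plane_type" {
  description = "Hetzner Cloud server type for the control plane (master) nodes"
  type        = string
  default     = "cx31"
}
```

You'd then override it in terraform.tfvars so the Terraform state matches the size you set manually.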
