[docs] In HA installs, wait for the first node to be ready before joining others #895

Closed
rancher-max opened this issue Apr 19, 2021 · 11 comments
Labels
kind/documentation Improvements or additions to documentation

Comments

@rancher-max
Contributor

Need to update the documentation to call out the importance of waiting for the initial node to be running before joining other server nodes, due to limitations in etcd learners.

@Martin-Weiss

When updating the docs with this, please also add information on how to check, both locally and remotely, whether the first node is ready and etcd is not currently in the middle of joining another secondary server.
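
For what it's worth, a minimal sketch of such checks, assuming rke2's defaults (kubeconfig at `/etc/rancher/rke2/rke2.yaml`, supervisor listening on port 9345 with a `/ping` health endpoint); `FIRST_SERVER_IP` is a placeholder:

```sh
# Local: is the rke2-server unit running, and is the node Ready?
systemctl is-active rke2-server
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
/var/lib/rancher/rke2/bin/kubectl get nodes

# Remote: the supervisor port answers once the first server is up.
curl -ks https://FIRST_SERVER_IP:9345/ping   # expect: pong
```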

@massep88

Maybe something interesting to add in the doc: while experiencing issues with simultaneous rke2/etcd server starts, I could validate that, for example, a 1 s wait between rke2/etcd starts was enough.
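
If it helps, one way to render that kind of fixed stagger is a systemd drop-in (a sketch only; as later comments note, a fixed delay is not a reliable gate):

```sh
# Hypothetical drop-in that delays rke2-server startup by one second.
mkdir -p /etc/systemd/system/rke2-server.service.d
cat > /etc/systemd/system/rke2-server.service.d/stagger.conf <<'EOF'
[Service]
ExecStartPre=/bin/sleep 1
EOF
systemctl daemon-reload
```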

@teebeey

teebeey commented May 6, 2021

Is this only when joining the second master, or do we have to wait between all the masters joining?

@brandond
Contributor

brandond commented May 6, 2021

Only one node can join the etcd cluster at a time, so all servers (we don't have masters) need to wait for previous nodes to complete their join before joining.
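
One way to tell whether the previous join has actually completed is to check the learner flag in the etcd member list; a sketch, assuming rke2's managed etcd static pod in kube-system and the usual rke2 etcd client cert paths (node name `server-1` is a placeholder):

```sh
kubectl -n kube-system exec etcd-server-1 -- etcdctl \
  --cacert /var/lib/rancher/rke2/server/tls/etcd/server-ca.crt \
  --cert   /var/lib/rancher/rke2/server/tls/etcd/server-client.crt \
  --key    /var/lib/rancher/rke2/server/tls/etcd/server-client.key \
  member list -w table
# A row with IS LEARNER=true means that member's join has not finished.
```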

@massep88

massep88 commented May 7, 2021

Yes, and it would be great if this were included in the documentation.
To validate @brandond's comment above, I have re-tested this, adding a specific check that etcd has started on the previous server before moving on to the next one.
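
As a rough sketch of that kind of gate (node Ready status is only a proxy for the etcd join having completed, and `PREV_NODE` is a placeholder):

```sh
# Wait until the previously joined server is Ready before starting
# rke2-server on the next one.
until kubectl wait --for=condition=Ready node/PREV_NODE --timeout=30s; do
  echo "still waiting for PREV_NODE..."
  sleep 5
done
```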

@davidnuzik
Contributor

@brandond why did you unassign @rancher-max?
cc: @cjellick

@brandond
Contributor

brandond commented Jun 14, 2021

When running through milestone items after @cjellick dropped and asked the rest of the team to finish moving things out of the v1.21.2+rke2r1 milestone, this issue came up, and @rancher-max indicated that it was unclear why he was assigned an issue that's not ready for testing. If he's expected to write the documentation and put in the PR, then someone needs to remind him.

@davidnuzik
Contributor

Oh okay, gotcha. @rancher-max and I synced on this previously and he agreed he could do the docs work here. I'll reassign it to him.

@braunsonm

braunsonm commented Oct 5, 2021

Can this limitation be wrapped by `rke2 server` rather than relying on the user to stagger the joining of master nodes? RKE2's binary could do a random exponential backoff and retry rather than require the delay.

This means bootstrapping RKE2 clusters using tools like Terraform gets a little more painful. Typically you'd create a group of master nodes using something like a `for_each` or `count` meta-argument, like so:

resource "virtual_machine" "server_nodes" {
  for_each = {
    master1 = 1.1.1.1
    master2 = 1.1.1.2
    master3 = 1.1.1.3
  }

 ....
}

The only way to truly force the first server node to be online before creating the others is to introduce dependencies between server nodes. This creates a lot of problems, because if you need to recreate the first server node, all the dependents need to be destroyed, which destroys the whole control plane.

The other option, which is more of a hack, is to introduce a sleep in front of the `systemctl enable --now rke2-server` call in your cloud-init script, hoping the wait is enough to ensure the previous node has joined. This doesn't work a lot of the time.
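
A slightly less fragile variant of that hack is to poll instead of sleeping for a fixed time; a sketch, again assuming the supervisor answers `/ping` on 9345 (note this only gates on the first server being up, not on earlier joins having finished):

```sh
#!/bin/sh
# Cloud-init fragment: wait for the first server, then start rke2-server.
# FIRST_SERVER_IP is a placeholder.
until curl -ksf https://FIRST_SERVER_IP:9345/ping >/dev/null; do
  echo "first server not reachable yet, retrying..."
  sleep 5
done
systemctl enable --now rke2-server
```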

@brandond
Contributor

brandond commented Oct 5, 2021

You do need to have dependencies between nodes, though: exactly one of your nodes must be started with `--cluster-init` (which is implicit on RKE2 when not providing `--server`); the remainder must be started with `--server` pointing at that first node, or at some other fixed registration endpoint that is backed by that node.
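
To make that dependency concrete, a minimal sketch of the two `config.yaml` variants (`my-shared-secret` and `FIRST_SERVER_IP` are placeholders):

```sh
# First server: no "server:" entry, so rke2 implicitly bootstraps the cluster.
cat > /etc/rancher/rke2/config.yaml <<'EOF'
token: my-shared-secret
EOF

# Every additional server: point at the first node (or a fixed registration
# endpoint backed by it) on the supervisor port.
cat > /etc/rancher/rke2/config.yaml <<'EOF'
token: my-shared-secret
server: https://FIRST_SERVER_IP:9345
EOF
```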

That said, we will probably eventually add some retry behavior so that joins work better; this is tracked under #897.

@rancher-max
Contributor Author

This doesn't really apply anymore, as in all of the latest releases it is actually possible to run all the rke2-server processes at the same time, so I'm going to close this docs issue. See #349 for details on the fix.

Development [DEPRECATED] automation moved this from Backlog to Done Issue / Merged PR May 10, 2022