Adding/removing etcd nodes #43

Closed
anton-johansson opened this Issue Apr 1, 2019 · 2 comments

anton-johansson commented Apr 1, 2019

I just tried adding masters to, and removing them from, my cluster (etcd runs on my master nodes). My plan was to replace my existing three (really bad) masters with three new ones.

I added the servers to the Ansible inventory and just ran the playbook. It didn't turn out the way I expected.

After some googling, I found out that adding nodes to (and removing them from) an etcd cluster requires etcdctl. So to add a node, you run (from one of the existing etcd nodes):

$ etcdctl --cert-file /etc/etcd/pki/etcd.pem --key-file /etc/etcd/pki/etcd-key.pem --ca-file /etc/etcd/pki/ca.pem --endpoints https://127.0.0.1:2379 member add k8s-master-10 https://<ip-address>:2380

You then get some information back that should go into the new node's service file:

  --initial-cluster k8s-master-10=https://<ip-address>:2380,k8s-master-03=https://<ip-address>:2380,k8s-master-02=https://<ip-address>:2380,k8s-master-01=https://<ip-address>:2380
  --initial-cluster-state existing

Note the list of initial cluster nodes and also the initial cluster state.
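
For reference, these flags end up in the etcd unit's ExecStart on the new node, roughly like this on my setup (the binary path and the rest of the flag set will differ between installs; this is just to show where --initial-cluster and --initial-cluster-state go):

  # (excerpt from the etcd unit on the new node; other flags omitted)
  ExecStart=/usr/local/bin/etcd \
    --name k8s-master-10 \
    --initial-advertise-peer-urls https://<ip-address>:2380 \
    --initial-cluster k8s-master-10=https://<ip-address>:2380,k8s-master-03=https://<ip-address>:2380,k8s-master-02=https://<ip-address>:2380,k8s-master-01=https://<ip-address>:2380 \
    --initial-cluster-state existing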

After doing this manually for each node, I could run the Ansible playbook again to tweak the configuration files so that they are identical on all servers. That worked.

The same goes for removing servers. I had to run:

$ etcdctl --cert-file /etc/etcd/pki/etcd.pem --key-file /etc/etcd/pki/etcd-key.pem --ca-file /etc/etcd/pki/ca.pem --endpoints https://127.0.0.1:2379 member remove <node-id>

... before actually shutting them down and excluding them from the cluster.
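
To get the <node-id> in the first place, you can list the current members with the same flags:

$ etcdctl --cert-file /etc/etcd/pki/etcd.pem --key-file /etc/etcd/pki/etcd-key.pem --ca-file /etc/etcd/pki/ca.pem --endpoints https://127.0.0.1:2379 member list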

This requires a bit of manual work (which is fine, if it really is necessary). Do you have any suggestions on how this could be automated in a better way? Or, if not, could/should it be documented within KTRW?
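
For what it's worth, the closest I have come to scripting it myself is a small wrapper along these lines (just a rough sketch reusing the cert paths from above, with a naive grep to make the add step idempotent), but it still feels clunky:

  # Rough sketch: add the new member only if it isn't already in the cluster.
  ETCDCTL="etcdctl --cert-file /etc/etcd/pki/etcd.pem --key-file /etc/etcd/pki/etcd-key.pem --ca-file /etc/etcd/pki/ca.pem --endpoints https://127.0.0.1:2379"
  if ! $ETCDCTL member list | grep -q "k8s-master-10"; then
    $ETCDCTL member add k8s-master-10 https://<ip-address>:2380
  fi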

@anton-johansson anton-johansson changed the title Adding/removing masters Adding/removing etcd nodes Apr 1, 2019

amimof commented Apr 2, 2019

I haven't really thought about a master/control-plane scale-out in ktrw. Usually you only scale out the worker nodes. An alternative would be to tear down the cluster and create a new one with additional masters or etcd nodes.

anton-johansson commented Apr 2, 2019

Alright, yeah, it's probably not the most common scenario. Our case wasn't really a scale-out, but rather a migration to better servers. But I guess we can close this for now. :)
