
disaster recovery of etcd cluster #104

Open
rayliu419 opened this issue Mar 11, 2022 · 6 comments

Comments

@rayliu419

Hi guys,

In the current etcd operator implementation, when we use Kubernetes to deploy an etcd cluster and the cluster loses quorum, the operator does nothing to recover it:
https://github.com/tkestack/kstone/blob/master/third_party/etcd-operator/pkg/cluster/reconcile.go#L95-L97
Do you have any plan to support recovering a lost-quorum etcd cluster?

Thanks,

@tangcong
Contributor

We don't use etcd-operator; we use the built-in kstone-etcd-operator. It has more complete support for persistent storage and better disaster tolerance. In the next version, we will support rebuilding lost-quorum clusters from snapshots, and kstone-dashboard will also support these operations visually.

@rayliu419
Author

Hi @tangcong, thanks for your reply.

Based on the official docs:

https://etcd.io/docs/v3.5/op-guide/runtime-configuration/#restart-cluster-from-majority-failure

When we deploy etcd clusters dynamically (which is how it works in Kubernetes), it appears that the only way to rebuild a lost-quorum cluster is from a snapshot. The problem with snapshots is that recovering from one will almost certainly lose some data: after the snapshot is taken, new updates are written to etcd, and then the Kubernetes cluster goes down.
With a statically configured etcd cluster, I can simply restart all nodes to rebuild it without using a snapshot. Do you have any ideas about how to achieve the same thing with dynamic configuration?

@tangcong
Contributor

We support storing data on persistent data disks. If it is not persistent, we can also force a snapshot from the healthy node to rebuild the cluster, so that the probability of losing data is very small. @rayliu419
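
For reference, "forcing a snapshot from the healthy node" comes down to streaming that member's backend database via the clientv3 Maintenance API (the operation `etcdctl snapshot save` performs). Below is a minimal sketch in Go, assuming one member is still reachable; the endpoint and output path are placeholders, not kstone-etcd-operator code:

```go
package main

import (
	"context"
	"io"
	"log"
	"os"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Connect to a member that is still serving (endpoint is a placeholder).
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://healthy-member.etcd:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatalf("create client: %v", err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	// Maintenance.Snapshot streams the member's backend database;
	// anything written after this point is outside the snapshot.
	rc, err := cli.Snapshot(ctx)
	if err != nil {
		log.Fatalf("open snapshot stream: %v", err)
	}
	defer rc.Close()

	f, err := os.Create("/backup/snapshot.db")
	if err != nil {
		log.Fatalf("create snapshot file: %v", err)
	}
	defer f.Close()

	if _, err := io.Copy(f, rc); err != nil {
		log.Fatalf("write snapshot: %v", err)
	}
	log.Println("snapshot saved; restore it offline to rebuild the cluster")
}
```

The saved snapshot is then restored offline (e.g. with `etcdctl`/`etcdutl snapshot restore`) onto fresh data dirs before the rebuilt members start, which is where the post-snapshot writes discussed below get lost.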

@rayliu419
Author

@tangcong,

"We support storing data on persistent data disks. If it is not persistent"
I think most Kubernetes users will use PVC/PV as external storage, so that part is not a problem. The problem is that even with persistent data disks, you can't restore the cluster from them the way you can with a static configuration.

"we can also force a snapshot from the healthy node to rebuild the cluster, so that the probability of losing data is very small."
Yes, we can force the healthy node to dump a snapshot, but there are two problems:

  1. Since the etcd cluster keeps accepting writes, a snapshot can't capture all of them, right?
  2. In some cases, such as when the Kubernetes cluster itself is down, you don't have a healthy node at all.

Actually, I don't know why the etcd community doesn't have a solution to recover without using a snapshot (maybe one exists, but I haven't found it). In a cloud-native environment, this is a critical case.

@rayliu419
Author

I think I have found a way to recover without losing data. etcd supports --force-new-cluster to reconfigure the cluster: we can pick the most up-to-date member and restart the cluster from it, as in the sketch below. If more than half of the PVCs survive, we can recover this way.
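
To illustrate, here is a minimal sketch of that recovery path in Go: restart the most up-to-date surviving member alone with `--force-new-cluster` (which rewrites the membership but keeps the committed keyspace), then grow the cluster back with member-add calls. The member name, data dir, and peer URL are placeholders, and this is not the kstone-etcd-operator implementation:

```go
package main

import (
	"context"
	"log"
	"os/exec"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// 1. Restart the surviving member with the highest revision as a
	//    one-member cluster. --force-new-cluster drops the old membership
	//    but keeps all committed key-value data in the data dir.
	etcd := exec.Command("etcd",
		"--name", "member-0",
		"--data-dir", "/var/lib/etcd", // the surviving member's PVC mount
		"--force-new-cluster",
	)
	if err := etcd.Start(); err != nil {
		log.Fatalf("start etcd: %v", err)
	}

	// A real controller would poll the member's health endpoint here.
	time.Sleep(5 * time.Second)

	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatalf("create client: %v", err)
	}
	defer cli.Close()

	// 2. Re-add the remaining members one at a time. Each re-added member
	//    must start with an empty data dir and join the existing cluster.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if _, err := cli.MemberAdd(ctx, []string{"http://member-1.etcd:2380"}); err != nil {
		log.Fatalf("add member: %v", err)
	}
}
```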

@tangcong
Contributor

tangcong commented Apr 3, 2022

Yes, I submitted an issue (kstone-io/kstone-etcd-operator#2) at the time, and the plan was implemented in this way. Thank you.
