Is it normal if only one node is alive, the leader id becomes -1? #84
Comments
Hi @xuluna, in the original Raft (and all other quorum-based consensus protocols), that's right. The cluster cannot make any decision and becomes read-only if it loses quorum (a majority of nodes). In a 2-node cluster, the minimum quorum size is 2, so if even one node goes down, the cluster can no longer serve write operations (in a 3-node cluster, one node down can be tolerated safely). But NuRaft supports a custom quorum size. If you manually reduce the quorum size to 1, the 2-node cluster will remain available even when one node goes down. Please note, however, that this does not guarantee safety against data loss or log divergence.
That's very awesome. Thank you!
@greensky00 If I set the custom quorum size and custom election quorum size to 1, and later node 2 comes back, can it be detected? Or do I have to manually set the sizes back and call add_srv again? Will the log be automatically synced as well?
@xuluna Regarding the custom quorum size, you need to manually set the size back to …
@greensky00 Thank you and happy new year!
I have two nodes in the cluster {1, 2}, and the leader is 1. Now node 2 is disconnected, and an error message like this is shown on node 1:
Error: raft_server.cxx:check_leadership_validity:856: 1 nodes (out of 2, 2 including learners) are not responding longer than 2500 ms, at least 2 nodes (including leader) should be alive to proceed commit
Error: raft_server.cxx:check_leadership_validity:858: will yield the leadership of this node
Is it by design that Raft cannot work with only one node alive?