
Guide on how to fail-over with Raft and DBreeze #55

Closed

Lectere opened this issue Jul 14, 2019 · 10 comments

@Lectere commented Jul 14, 2019

I have a Raft test working, and it looks really cool. I need to keep track of a list of usernames, each with an integer counter, across 2-6 redundant nodes.

Is there a guide on how to do this with Raft/DBreeze? Or maybe a general guide on how a distributed key/value list is done with Raft/DBreeze?

Thanks

@hhblaze (Owner) commented Jul 14, 2019

With Raft you send commands, so send an AddUser command. It will fire on all nodes inside the OnCommitted callback; open a transaction there and write your user into your favorite DB, and that is all - the user is stored everywhere. You can then read your user from any node... Sorry, no code within the next 3 weeks... Vacation...
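A minimal sketch of what that committed-command handler could look like, assuming a JSON-serialized AddUser payload and a callback that hands over the command bytes. The Raft wiring (how the callback is registered, its exact signature, the AddUserCommand type) is an illustrative assumption, not the actual Raft.Net API; the DBreeze calls (DBreezeEngine, GetTransaction, Insert, Select, Commit) are the regular DBreeze API.

```csharp
using System.Text;
using DBreeze;
using Newtonsoft.Json;

// Hypothetical command payload replicated via Raft (names are illustrative).
public class AddUserCommand
{
    public string UserName { get; set; }
    public int Counter { get; set; }
}

public class UserStateMachine
{
    // One DBreezeEngine per node, reused for the node's lifetime.
    static DBreezeEngine engine = new DBreezeEngine(@"C:\RaftNode\DB");

    // Invoke this from the Raft "on committed" callback on every node,
    // so each node applies the same command to its local DBreeze store.
    public static void OnCommitted(byte[] commandData)
    {
        var cmd = JsonConvert.DeserializeObject<AddUserCommand>(
            Encoding.UTF8.GetString(commandData));

        using (var tran = engine.GetTransaction())
        {
            // Key = user name, value = counter; "Users" is just a table-name choice.
            tran.Insert<string, int>("Users", cmd.UserName, cmd.Counter);
            tran.Commit();
        }
    }

    // Reads are local on any node, since every node holds the same data.
    public static int? GetCounter(string userName)
    {
        using (var tran = engine.GetTransaction())
        {
            var row = tran.Select<string, int>("Users", userName);
            return row.Exists ? row.Value : (int?)null;
        }
    }
}
```

A plain key/value table (user name → counter) is all the original question needs, which is why a single Insert per committed command is enough here.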

@Lectere (Author) commented Jul 15, 2019

I'll gladly wait three weeks! Have a nice vacation!

That all makes sense, but it seems like the 'easy' part - and by easy I mean easy using Raft. Now suppose a node has a total crash: hard disk gone, out for at least 24 hours. The OS and software are reinstalled and the node joins the Raft network again. Is there a way for it to get the complete set from the leader, without having to send the complete set to every node?

@hhblaze (Owner) commented Jul 15, 2019

OK, your concern is clear. Currently the system will replay all commands, starting from the first one, to the newly added node; there is no faster restoration mechanism yet. Such a mechanism is possible, and I will consider it for a future implementation. For now, the easiest way is to stop a working node, clone its content to the newly added node, and start both nodes. That can be achieved with third-party tools.

@hhblaze (Owner) commented Jul 15, 2019

As an explanatory add-on: if you have 5 nodes and one crashes, the other 4 keep running. When we join a clean node in place of the crashed one, it connects to the leader and receives all commands from it in historical sequence, restoring its state. During restoration the leader will be a bit more loaded while it streams the history to the newly added node; the other nodes continue to function in standard mode.

@Lectere (Author) commented Jul 15, 2019

The full historical sequence isn't necessary. If a user's counter has changed from 0 to 1, from 1 to 2, and from 2 to 3, you don't need to know the whole history, just the current value; I only need to know that the value is 3 now.

So a clean node or a crashed one, that's all the same to me. But it should go to the leader for a copy of the current state, during which commits are suspended. And that should be traffic from the leader to the new/clean node only, not to every node in the system. I think the term for that is 'to catch up' :)

Btw, go enjoy your vacation :)

@hhblaze (Owner) commented Jul 15, 2019

It works exactly like you need. You can also set up an entity to remember only the latest state, without history; check the configuration in the documentation.

@Lectere (Author) commented Jul 15, 2019

I have read the documentation, but I can't find this section. Maybe I've overlooked it; can you point me to the specific document and page/section/chapter? Thanks

@hhblaze (Owner) commented Jul 15, 2019

(screenshot of the relevant section of the Raft documentation)

hhblaze closed this as completed Jul 15, 2019
@hhblaze (Owner) commented Jul 15, 2019

It is in the Raft documentation: persistence in memory with only the latest entity state.
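For orientation only, a sketch of what such an entity configuration might look like. The class and property names below (RaftEntitySettings, InMemoryEntity, InMemoryEntityStartSyncFromLatestEntity) are assumptions recalled from the Raft documentation rather than verified here, so treat the documentation referenced in the screenshot above as authoritative.

```csharp
// Illustrative only - the property names are assumptions; verify them
// against the Raft documentation section shown in the screenshot above.
var entitySettings = new RaftEntitySettings
{
    EntityName = "UserCounters",

    // Keep the replicated entity in memory and, when a node (re)joins,
    // sync it starting from the latest entity state instead of replaying
    // the full command history.
    InMemoryEntity = true,
    InMemoryEntityStartSyncFromLatestEntity = true
};
```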

hhblaze reopened this Jul 15, 2019
@Lectere (Author) commented Jul 29, 2019

To what extent DBreeze can be used to store your own dataset is still unclear to me. I've already built a mechanism that transfers the complete dataset to other nodes on request, and I only use Raft to determine who the leader is. I have some other questions, but I'll post them on the Raft repository.

Lectere closed this as completed Jul 29, 2019