nackstein/lite-raft

Automatically exported from code.google.com/p/lite-raft
lite-raft is still in alpha stage. The raft module is complete, but some rewrites may occur when I come up with more elegant code. lite-raft implements the Raft consensus quorum algorithm; for more information on the Raft algorithm see http://raftconsensus.github.io

On top of the consensus module, lite-raft provides some scripts that implement a failover cluster for generic "packages". A package consists of a collection of resources (IPs, LUNs) and programs that can run on multiple servers in an active/stand-by scenario.

======================================================================

Quick tutorial on the raft module:

Here are the steps you need to do manually for now:

- configure conf/cluster_nodes with a space-separated list of hostnames (those that will form the cluster)
- configure conf/election_timeout with a number representing the seconds before a follower turns into a candidate (I use values from 3 to 10, but for production I would not go under 10)
- configure conf/snapshot_period with the interval between snapshots; this value is the number of log entries before snapshotting, it is not time related
- verify that the user you choose to run lite-raft with can ssh between the nodes without a password and without confirmation of the ssh key fingerprint
- copy the lite-raft directory to every node and execute it with:

  ./lite-raft first-boot

(Example conf files and a few command sketches are given at the end of this README.)

To stop it on a node:

  ./lite-raft stop

To start it (from the second time on):

  ./lite-raft start

To insert a test entry:

  ./lite-raft-client set key value

To make a conditional set:

  ./lite-raft-client if key = X set Y

To make a get:

  ./lite-raft-client get key

To add a node to the cluster:

  ./lite-raft-client add-member node2

To delete a node from the cluster:

  ./lite-raft-client del-member node2

(Cluster configuration changes are performed online!)

======================================================================

Quick tutorial on the pkg module:

Copy the pkg directory to the nodes that will handle the failover package. Those nodes can be different from the ones that run lite-raft!

Configure pkg/lite-raft-remote with the list of lite-raft cluster servers (see the sketch at the end of this README).

Copy pkg/control-scripts/template to a new file (e.g. testpkg1) and edit testpkg1 to set some package parameters.

Configure the lite-raft state machine to contain the package configuration:

  ./pkg init testpkg1

Run the package on every node:

  ./pkg run testpkg1

and verify its status:

  ./pkg status testpkg1

Then stop it on the active node:

  ./pkg stop testpkg1

and watch it respawn on the stand-by node!

(The pkg module is still in an early stage of development; it is more a proof of concept than a finished work.)

Read the code and give me feedback if you like it or not :)
My mail: luigi.tarenga@gmail.com
Enjoy!

Required binaries (the list should be complete...):
chmod cp date head hostname ls ln mv mkdir od pkill pgrep pax rm rmdir sed sleep (with decimal support) sort ssh perl (on HP-UX or where sleep doesn't support decimals)
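======================================================================

Example sketches:

The three conf files from the raft tutorial are plain text files. The following is a minimal sketch, assuming a three node cluster with hypothetical hostnames node1, node2 and node3; the election timeout and snapshot period numbers are only example values.

conf/cluster_nodes (space-separated hostnames of the cluster members):

  node1 node2 node3

conf/election_timeout (seconds before a follower turns candidate; 10 is the suggested production minimum):

  10

conf/snapshot_period (number of log entries between snapshots; 1000 is just an example value):

  1000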
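A minimal sketch of the passwordless ssh setup mentioned in the checklist, assuming OpenSSH and the hypothetical hostnames above; ssh-keygen and ssh-copy-id are standard OpenSSH tools. Run this as the user that will run lite-raft, on every node, pointing it at the other nodes:

  ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519   # create a key pair without a passphrase
  ssh-copy-id node2                                   # install the public key on node2 (confirm the fingerprint once)
  ssh-copy-id node3                                   # same for node3
  ssh node2 hostname                                  # should print node2 without any prompt
  ssh node3 hostname                                  # should print node3 without any prompt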
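A sketch of copying the lite-raft directory to every configured node and performing the first boot. It assumes the directory is named lite-raft, the loop is run from its parent directory, and the copy lands in each user's home directory; all of this is an assumption about layout, not something the scripts require.

  for node in $(cat lite-raft/conf/cluster_nodes); do
      scp -r lite-raft "$node":                               # copy the whole directory to the node
      ssh "$node" 'cd lite-raft && ./lite-raft first-boot'    # first start on that node
  done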
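A hypothetical lite-raft-client session using the commands from the tutorial. The conditional set is read here as "set the key to the new value only if it currently holds the given value"; that reading is an assumption, so check the code if the exact semantics matter.

  ./lite-raft-client set color blue            # store the entry color=blue
  ./lite-raft-client get color                 # expected to return blue
  ./lite-raft-client if color = blue set red   # conditional set: color should become red
  ./lite-raft-client get color                 # expected to return red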
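The format of pkg/lite-raft-remote is not spelled out in the pkg tutorial; assuming it follows the same convention as conf/cluster_nodes (a plain list of the lite-raft cluster servers), a sketch could be:

  node1 node2 node3

Keep in mind that the nodes running the pkg module can be different machines from the lite-raft cluster servers listed in this file.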