# raftkv


A distributed key-value store built with Raft.

Most of the code is taken from etcd; this project exists for reading and learning its source code.

## Getting Started

### Install from remote

```sh
go get github.com/ejunjsh/raftkv/cmd/raftkvd
```

### Build and test locally

```sh
cd $GOPATH/src/github.com/ejunjsh/raftkv/
sh ci.sh
```

## Running a single node

First start a single-member cluster of raftkvd:

```sh
raftkvd --id 1 --cluster http://127.0.0.1:12379 --port 12380
```

Each raftkvd process maintains a single raft instance and a key-value server. A process's comma-separated peer list (`--cluster`), its raft ID as an index into that peer list (`--id`), and the port for its HTTP key-value server (`--port`) are all passed on the command line.

Next, store a value ("hello") to a key ("my-key"):

```sh
curl -L http://127.0.0.1:12380/my-key -XPUT -d hello
```

Finally, retrieve the stored key:

```sh
curl -L http://127.0.0.1:12380/my-key
```

## Running a local cluster

```sh
raftkvd --id 1 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 12380
raftkvd --id 2 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 22380
raftkvd --id 3 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 32380
```

This will bring up three raftkvd instances.

Now it's possible to write a key-value pair to any member of the cluster and likewise retrieve it from any member.
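
For example, with the three instances above running, a value written through one member should be readable from the others (assuming the same HTTP key-value interface shown earlier):

```sh
curl -L http://127.0.0.1:22380/my-key -XPUT -d hello
curl -L http://127.0.0.1:12380/my-key
curl -L http://127.0.0.1:32380/my-key
```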

## Fault Tolerance

To test cluster recovery, first start a cluster and write a value "foo":

```sh
# start all three nodes as before
raftkvd --id 1 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 12380
raftkvd --id 2 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 22380
raftkvd --id 3 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 32380

curl -L http://127.0.0.1:12380/my-key -XPUT -d foo
```

Next, kill one node and overwrite the value with "bar" to check cluster availability:

```sh
# kill node 2
kill ...

curl -L http://127.0.0.1:12380/my-key -XPUT -d bar
curl -L http://127.0.0.1:32380/my-key
```

Finally, bring the node back up and verify it recovers with the updated value "bar":

```sh
# restart node 2
raftkvd --id 2 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 22380

curl -L http://127.0.0.1:22380/my-key
```

## Dynamic cluster reconfiguration

Nodes can be added to or removed from a running cluster using requests to the REST API.

For example, suppose we have a 3-node cluster that was started with the commands:

```sh
raftkvd --id 1 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 12380
raftkvd --id 2 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 22380
raftkvd --id 3 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 32380
```

A fourth node with ID 4 can be added by issuing a POST:

```sh
curl -L http://127.0.0.1:12380/4 -XPOST -d http://127.0.0.1:42379
```

Then the new node can be started as the others were, using the `--join` option:

```sh
raftkvd --id 4 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379,http://127.0.0.1:42379 --port 42380 --join
```

The new node should join the cluster and be able to service key/value requests.
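
Once node 4 is up, a quick sanity check (assuming it exposes the same HTTP key-value interface on its `--port`) is to read an existing key back through it:

```sh
curl -L http://127.0.0.1:42380/my-key
```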

We can remove a node using a DELETE request:

```sh
curl -L http://127.0.0.1:12380/3 -XDELETE
```

Node 3 should shut itself down once the cluster has processed this request.
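
If node 3 has shut down, requests to its port should fail, while the remaining members keep serving the key, for example:

```sh
# node 3 (port 32380) should no longer respond
curl -L http://127.0.0.1:32380/my-key
# the remaining members still serve requests
curl -L http://127.0.0.1:12380/my-key
```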

## Reference

https://github.com/etcd-io/etcd