RedJOe-0608/raft-node-ts
Raft Consensus Algorithm — Naive Implementation

A naive, educational implementation of the Raft Consensus Algorithm built with TypeScript and Express. Three nodes, HTTP-based RPCs, and a simple in-memory key-value store as the state machine.

This is not production-ready. It's a learning exercise that covers the core of Raft — leader election, log replication, and commit tracking. Limitations are documented below.

Companion blog post: I Tried to Implement the Raft Consensus Algorithm. Here's What Happened.


What's Implemented

  • Leader election with randomised election timeouts
  • Vote requesting and granting (with log up-to-date checks per §5.4.1)
  • Heartbeats to suppress spurious elections
  • Log replication via AppendEntries RPC
  • Log reconciliation — leader walks back nextIndex on rejection until logs converge
  • Commit index advancement once majority replication is confirmed
  • Application of committed entries to a key-value state machine
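The up-to-date check is the part that is easiest to get backwards. A minimal TypeScript sketch of the §5.4.1 comparison (names here are illustrative, not the repo's actual API): compare last log terms first; only if they are equal does log length break the tie.

```typescript
interface LogEntry {
  term: number;
  // command payload omitted for brevity
}

// §5.4.1: grant a vote only if the candidate's log is at least as
// up-to-date as the voter's. Higher last term wins; on a tie, the
// longer log wins. Indices are 0-based; -1 means an empty log.
function isCandidateLogUpToDate(
  candidateLastLogTerm: number,
  candidateLastLogIndex: number,
  myLog: LogEntry[]
): boolean {
  const myLastTerm = myLog.length > 0 ? myLog[myLog.length - 1].term : 0;
  const myLastIndex = myLog.length - 1;
  if (candidateLastLogTerm !== myLastTerm) {
    return candidateLastLogTerm > myLastTerm;
  }
  return candidateLastLogIndex >= myLastIndex;
}
```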

What's Not Implemented

  • Persistence (term, votedFor, log are all in-memory — a crashed node loses everything)
  • Log compaction / snapshots
  • Fully linearizable reads
  • Cluster membership changes

Project Structure

src/
├── server.ts       # Express HTTP server, wires routes to RaftNode
├── RaftNode.ts     # Core Raft logic — elections, replication, commits
└── Store.ts        # Simple in-memory key-value state machine
docker-compose.yml  # Spins up three nodes (A, B, C)
Dockerfile

Running Locally

Prerequisites

  • Docker and Docker Compose

Start the cluster

docker-compose up --build

This spins up three nodes:

Node   Port
A      3000
B      3001
C      3002

Within a few hundred milliseconds, one node will win an election and become leader. You can check which one by hitting the status endpoint on each node.
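The quick convergence comes from the randomised election timeouts: each node draws its timer from a range, so one node almost always times out first and wins before the others become candidates. A sketch of that choice (the range values here are illustrative, not necessarily the ones this repo uses):

```typescript
// Pick an election timeout uniformly at random from [minMs, maxMs).
// Randomisation makes simultaneous candidacies (and split votes) unlikely.
function randomElectionTimeout(minMs = 150, maxMs = 300): number {
  return minMs + Math.floor(Math.random() * (maxMs - minMs));
}
```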


API Reference

GET / — Node status

curl http://localhost:3000/
{
  "nodeId": "A",
  "state": "leader",
  "currentTerm": 1,
  "commitIndex": 2,
  "lastApplied": 2,
  "logLength": 3,
  "store": { "x": "42" }
}

POST /command — Write a key (send to leader)

# Set a key
curl -X POST http://localhost:3000/command \
  -H "Content-Type: application/json" \
  -d '{"op": "set", "key": "x", "value": "42"}'

# Delete a key
curl -X POST http://localhost:3000/command \
  -H "Content-Type: application/json" \
  -d '{"op": "delete", "key": "x"}'

Writes must go to the leader. If you POST to a follower, it returns { "success": false, "message": "Not the leader" }.


GET /read/:key — Read a key (leader only)

curl http://localhost:3000/read/x
{ "key": "x", "value": "42" }

Testing It

1. Check who the leader is

curl http://localhost:3000/ | jq .state
curl http://localhost:3001/ | jq .state
curl http://localhost:3002/ | jq .state

One will say "leader", the others "follower".

2. Write a value through the leader

curl -X POST http://localhost:3000/command \
  -H "Content-Type: application/json" \
  -d '{"op": "set", "key": "name", "value": "raft"}'

3. Read it back from any node

curl http://localhost:3000/read/name
curl http://localhost:3001/read/name   # will reject — reads are leader-only

4. Simulate a leader crash

Find which port is the leader, then stop that container:

docker-compose stop node-a   # or node-b / node-c

Within ~300ms, the remaining two nodes will detect the missing heartbeats, hold an election, and elect a new leader. Check with:

curl http://localhost:3001/
curl http://localhost:3002/

You should see one of them flip to "leader" with an incremented currentTerm.

5. Bring the old node back

docker-compose start node-a

The restarted node comes back as a follower and catches up via log replication from the new leader. Check its / endpoint — the store should reflect all committed writes.
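The catch-up is the nextIndex walk-back mentioned above: the leader retries AppendEntries at progressively earlier indices until the follower's log matches, then sends everything from that point. A simplified sketch with the consistency probe abstracted as a callback (names are hypothetical):

```typescript
interface Entry {
  term: number;
  command: string;
}

// Leader-side reconciliation loop for one follower: on each rejection,
// step nextIndex back one slot and retry until the follower accepts
// the (prevIndex, prevTerm) consistency check, or we reach the start.
function reconcile(
  leaderLog: Entry[],
  followerAccepts: (prevIndex: number, prevTerm: number) => boolean,
  nextIndex: number
): number {
  while (nextIndex > 0) {
    const prevIndex = nextIndex - 1;
    const prevTerm = leaderLog[prevIndex].term;
    if (followerAccepts(prevIndex, prevTerm)) break;
    nextIndex--; // follower rejected: walk back and retry
  }
  return nextIndex; // first index to replicate from
}
```

Walking back one entry per round trip is the naive approach; production implementations usually skip back faster, for example a whole term at a time.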

6. Write during a crash

Try writing a key, stopping the leader immediately afterwards, and verifying that committed entries survive on the remaining nodes. Uncommitted entries (those that hadn't reached a majority before the crash) may be lost or overwritten; that's expected Raft behaviour, not a bug in this implementation.


How It Works (Brief)

Each node is in one of three states: follower, candidate, or leader.

  1. All nodes start as followers with a randomised election timer.
  2. If a follower's timer fires without hearing from a leader, it becomes a candidate and requests votes.
  3. A candidate that gets votes from a majority becomes leader and starts sending heartbeats.
  4. All writes go through the leader. The leader appends the entry to its log, replicates it to peers, and only commits once a majority acknowledges it.
  5. Committed entries are applied to the key-value store in order.
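Step 4's majority rule can be sketched as a scan over matchIndex (the leader's record of how far each peer has replicated). One subtlety worth knowing from Raft §5.4.2: a leader only directly commits entries from its own term; earlier-term entries become committed as a side effect. Names are hypothetical and indices are 0-based, matching the status payload shown earlier:

```typescript
// Advance the leader's commitIndex to the highest index N such that
// a majority of the cluster holds log[N] and log[N].term === currentTerm.
function advanceCommitIndex(
  log: { term: number }[],
  matchIndex: Record<string, number>, // peer id -> highest replicated index
  commitIndex: number,
  currentTerm: number,
  clusterSize: number
): number {
  for (let n = log.length - 1; n > commitIndex; n--) {
    // Terms are non-decreasing along the log, so once we see an
    // older-term entry we can stop: never directly commit those.
    if (log[n].term !== currentTerm) break;
    // Count the leader itself plus every peer that has replicated index n.
    const replicas = 1 + Object.values(matchIndex).filter((m) => m >= n).length;
    if (replicas * 2 > clusterSize) return n; // majority reached
  }
  return commitIndex;
}
```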

For the full breakdown, see the companion blog post or the Raft paper.


References

  • Diego Ongaro and John Ousterhout, "In Search of an Understandable Consensus Algorithm" (USENIX ATC 2014), the Raft paper
