
Ring election

A Node.js library with a distributed leader/follower algorithm, ready to be used.

Is it your dream to build a service like Cassandra, Kafka, Zipkin, Jaeger, or Redis? You are in the right place: join the ring-election project!


Contents

Getting started
Overview
Use cases
Configuration
Monitoring
High level design
Contribute
Versioning
License

Getting started

Install with npm!
  npm i ring-election --save

Example: you do not need to choose a node as the leader; just list all your nodes and start every one of them as a follower.
The first node to start will become the leader. The leader does not have assigned partitions, so start at least two instances when you integrate the library.

How to integrate

const ring = require('ring-election')
const follower = ring.follower
const {
  BECOME_LEADER,
  PARTITIONS_ASSIGNED,
  PARTITIONS_REVOKED
} = ring.constants

follower.createClient()
// If you want a REST API for monitoring, invoke startMonitoring.
follower.startMonitoring()
// To get ring info:
ring.follower.ring()
// To get the assigned partitions:
let assignedPartitions = ring.follower.partitions()
// Now assume that a follower will create some data
// and you want to partition this data.
let partition = ring.follower.defaultPartitioner('KEY')
// Save your data, including the partition, on a storage:
// you will be the only one in the cluster working on the partitions assigned to you.

// If you want to handle assigned partitions
// (use the other constants to listen to other events), you can do it this way.
ring.follower.eventListener.on(PARTITIONS_ASSIGNED, (newAssignedPartitions) => {
   // DO STUFF
})
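The other imported constants can be handled the same way. Below is a minimal sketch, assuming BECOME_LEADER and PARTITIONS_REVOKED are emitted by the same eventListener in the same fashion as PARTITIONS_ASSIGNED:

// A sketch, assuming these events are emitted analogously to PARTITIONS_ASSIGNED.
ring.follower.eventListener.on(BECOME_LEADER, () => {
  // This node is now the leader: it has no assigned partitions,
  // so stop any partition-bound work here.
})

ring.follower.eventListener.on(PARTITIONS_REVOKED, (revokedPartitions) => {
  // Stop working on the partitions that were taken away from this node.
})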

Try this to better understand the behaviour

  docker-compose up

Check the assigned partitions at localhost:9000/status, or change the port to 9001/9002 for the other nodes.

Try stopping and restarting containers and observe the behaviour.
If you want to develop new features or fix a bug, you can do that without Docker images; just configure the environment variables correctly (you can see them in the docker-compose.yaml file).

Overview and rationale

In modern systems it is often necessary to distribute the application load so that the system scales and each piece of data is processed by exactly one instance.
Ring-election is a driver that implements a distributed algorithm assigning to each node the partitions it should work on. In a simple use case, each node fetches the data belonging to the partitions it owns and works on that data.
The algorithm assigns one or more partitions to each node.
A node is removed from the cluster if it does not send a heartbeat for a while; this process is called the heart check.
Each node in the ring has an ID and a priority; if the leader node dies, the node with the lowest priority is elected as the new leader.
If a node is added to or removed from the cluster, the allocated partitions are rebalanced. For example, with the default 10 partitions and two followers, each follower owns 5 partitions; when a third follower joins, the 10 partitions are redistributed roughly evenly among the three.

What does the ring-election driver offer you?

  • A default partitioner that, given an object, returns the partition it is assigned to (see the sketch after this list).
  • A leader election mechanism.
  • Failure detection between nodes.
  • Assignment and rebalancing of partitions between nodes.
  • Automatic re-election of the leader.
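As an illustration of what a default partitioner does, here is a minimal sketch of a key-to-partition mapping. It is an assumption about the general technique, not the library's actual implementation: hash the key and reduce it modulo NUM_PARTITIONS, so the same key always maps to the same partition.

// Illustrative only: a hash-modulo partitioner, not the library's own code.
const crypto = require('crypto')
const NUM_PARTITIONS = 10 // should match the NUM_PARTITIONS configuration

function examplePartitioner (key) {
  const hash = crypto.createHash('md5').update(String(key)).digest()
  // Interpret the first 4 bytes as an unsigned integer and reduce it modulo the partition count.
  return hash.readUInt32BE(0) % NUM_PARTITIONS
}

examplePartitioner('KEY') // always the same partition for the same key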

What problems can you solve with this driver?

  • Scalability
  • High availability
  • Concurrency between nodes in a cluster
  • Automatic failover

Use cases

This section introduces what you can build on top of ring-election, using it as a driver/library.

Distributed scheduler
Each scheduler instance works only on its assigned partitions (a sketch follows the list below).
A real implementation of this use case is available at https://github.com/pioardi/hurricane-scheduler

Distributed lock
Distributed cache
Distributed computing
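For instance, a distributed scheduler can be sketched as follows. The job list and the work done per job are hypothetical, and the sketch assumes the PARTITIONS_ASSIGNED event delivers an array of partition ids; the point is that every instance sees all jobs but only runs the ones whose key falls into its own partitions.

const ring = require('ring-election')
const { PARTITIONS_ASSIGNED } = ring.constants

ring.follower.createClient()

let myPartitions = []
ring.follower.eventListener.on(PARTITIONS_ASSIGNED, (newAssignedPartitions) => {
  // Assumption: newAssignedPartitions is an array of partition ids.
  myPartitions = newAssignedPartitions
})

// Hypothetical jobs, known to every instance.
const jobs = [{ id: 'daily-report' }, { id: 'hourly-cleanup' }]

setInterval(() => {
  for (const job of jobs) {
    const partition = ring.follower.defaultPartitioner(job.id)
    if (myPartitions.includes(partition)) {
      // Only the instance owning this partition runs the job.
      console.log(`running ${job.id} on partition ${partition}`)
    }
  }
}, 60000)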

Try it out!

   docker image build -t ring-election .
   docker-compose up

Configuration

PORT: The leader listens on this port; default is 3000.
TIME_TO_RECONNECT: How long a follower waits before connecting to a new leader, in ms; default is 3000 ms.
HEARTH_BEAT_FREQUENCY: The frequency with which a follower sends a heartbeat; default is 1000 ms.
HEARTH_BEAT_CHECK_FREQUENCY: The frequency with which the leader performs the heart check; default is 3000 ms.
LOG_LEVEL: One of the levels listed at https://www.npmjs.com/package/winston#logging-levels; default is info.
NUM_PARTITIONS: Number of partitions to distribute across the cluster; default is 10.
SEED_NODES: Comma-separated hostnames and ports of the seed (leader) nodes, e.g. hostname1:port,hostname2:port.
MONITORING_PORT: Port on which the REST monitoring service is exposed; default is 9000.
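A minimal sketch of configuring a node from code. The values are examples only, and it assumes the library reads these variables from process.env when it starts:

// Example values only; set these before starting the follower.
process.env.SEED_NODES = 'node1:3000,node2:3000,node3:3000'
process.env.NUM_PARTITIONS = '10'
process.env.MONITORING_PORT = '9000'

const ring = require('ring-election')
ring.follower.createClient()
ring.follower.startMonitoring()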

Monitoring API

To monitor your cluster, contact any node on /status (HTTP GET), or contact a follower node on /partitions (HTTP GET).
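For example, a node's status can be fetched with a plain HTTP GET. The port is the MONITORING_PORT configured above; the shape of the response body is not shown here:

const http = require('http')

// Query the monitoring endpoint of a node running on localhost:9000.
http.get('http://localhost:9000/status', (res) => {
  let body = ''
  res.on('data', (chunk) => { body += chunk })
  res.on('end', () => console.log('cluster status:', body))
})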

TODO List

Re-add a client to the cluster when it has been removed and sends a heartbeat again.
Implement an event emitter to notify library users when something happens.

High Level Diagram

See the wiki page.

How to contribute

See the contributing guidelines in CONTRIBUTING.

Versioning

We use SemVer (http://semver.org/) for versioning.

License

This project is licensed under the MIT License - see the [LICENSE](./LICENSE) file for details
