Practical Analysis of Replication, Sentinel, and Clustering in Distributed Systems
Built with Python, Redis, and GNS3
- Overview
- Key Concepts
- System Architecture
- Experiment Scenarios
- Getting Started
- Usage
- Results & CAP Theorem
- How to Contribute
- License
DistributedSystem-Redis-Lab is a practical implementation for analyzing three fundamental mechanisms in distributed systems using Redis:
- Replication
- Sentinel (High Availability)
- Cluster (Sharding)
Through network simulations using GNS3 and a localhost environment, this project goes beyond theory by validating Eventual Consistency, measuring replication lag, testing Automatic Failover during server crashes, and demonstrating Load Balancing across distributed nodes.
The repository includes modular Python scripts for automated testing, logging, and visualization to empirically validate the CAP Theorem in real-world scenarios.
This project focuses on three core pillars of Redis distributed capabilities:
Replication:
- Asynchronous data replication from a master node to one or more replicas
- Emphasis on Eventual Consistency
- Measurement of replication lag using Python-based monitoring scripts

Sentinel:
- Continuous monitoring of Redis nodes
- Automatic failure detection and master promotion
- Ensures high availability with minimal downtime

Cluster:
- Horizontal scaling through automatic data partitioning
- Uses hash slots to distribute keys across multiple nodes
- Handles request routing transparently
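Redis Cluster maps every key to one of 16384 hash slots using CRC16 (the XModem variant) modulo 16384; when a key contains a non-empty `{...}` hash tag, only that substring is hashed. As a standalone illustration of this mapping (redis-py ships its own implementation; this pure-Python sketch is for clarity only):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM (poly 0x1021, init 0x0000), the variant Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of Redis Cluster's 16384 hash slots.
    If the key contains a non-empty {...} hash tag, only that part is hashed,
    which lets related keys (e.g. {user:1}:name, {user:1}:age) share a slot."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

print(key_slot("user:1000"))                                   # some slot in [0, 16384)
print(key_slot("{user:1}:name") == key_slot("{user:1}:age"))   # True: shared hash tag
```

Hash tags are what make multi-key operations possible in a cluster: keys sharing a tag land on the same node.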
Experiments are conducted in two environments:
- GNS3 Network Simulation: simulates real-world network conditions such as latency, packet loss, and partitions.
- Localhost Environment: used for rapid prototyping and functional validation of the automation scripts.
- 1 Master Node (Read/Write)
- 2+ Replica Nodes (Read-only / Backup)
- 3 Sentinel Instances (Quorum-based decision making)
- Python Clients (Load generation, monitoring, and metrics collection)
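For reference, a quorum of 2 across the three Sentinel instances above could be declared with a configuration along these lines (the master name `mymaster` and the IP address are placeholders, not values from this repository):

```
# redis_sentinel.conf (illustrative values)
port 26379
sentinel monitor mymaster 192.168.1.10 6379 2   # quorum of 2 out of 3 Sentinels
sentinel down-after-milliseconds mymaster 5000  # declare the master down after 5 s
sentinel failover-timeout mymaster 60000
```

A quorum of 2 means at least two Sentinels must agree the master is unreachable before a failover is initiated.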
The codebase is organized into modular experiment scenarios:
- Consistency Check: measures the time difference between a write on the master and its visibility on the replicas.
- Failover Test: simulates a Redis master crash (e.g., SIGTERM) and records the time taken by Sentinel to elect a new master.
- Sharding Analysis: inserts bulk data into a Redis Cluster and verifies key distribution across hash slots and nodes.
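As a sketch of the consistency-check idea (not the repository's actual script), replication lag can be measured by writing a timestamped value to the master and polling a replica until it becomes visible; the redis-py clients for master and replica are assumed to be constructed elsewhere:

```python
import time

def measure_replication_lag(master, replica, key="lag:probe", timeout=5.0):
    """Write a unique value to the master, poll the replica until it is
    visible, and return the observed replication lag in milliseconds."""
    value = str(time.time()).encode()
    start = time.perf_counter()
    master.set(key, value)
    deadline = start + timeout
    while time.perf_counter() < deadline:
        if replica.get(key) == value:
            return (time.perf_counter() - start) * 1000.0
        time.sleep(0.001)  # poll roughly every 1 ms
    raise TimeoutError(f"value not visible on replica within {timeout}s")

# Usage (hosts/ports are placeholders):
#   import redis
#   master  = redis.Redis(host="192.168.1.10", port=6379)
#   replica = redis.Redis(host="192.168.1.11", port=6379)
#   print(f"lag: {measure_replication_lag(master, replica):.2f} ms")
```

Because the polling interval bounds the measurement resolution, averaging over many probes gives a more stable lag estimate.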
- Python 3.8+
- Redis Server (local or VM-based)
- GNS3 (optional, for advanced network simulation)
- pip (Python package manager)
- Clone the repository: `git clone https://github.com/reinoyk/DistributedSystem_FP.git`
- Navigate to the project directory: `cd DistributedSystem_FP`
- Install Python dependencies: `pip install redis pandas matplotlib`
- Redis Configuration Files: located in the `configs/` directory (e.g., `redis_master.conf`, `redis_replica.conf`, `redis_sentinel.conf`).
- Python Script Configuration: update the IP addresses and ports in `config.py` (or at the top of each script) to match your GNS3 or localhost setup.
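A hypothetical `config.py` might centralize those addresses; every value below is a placeholder to adapt to your own topology:

```python
# config.py -- central place for node addresses (placeholder values)
MASTER = {"host": "192.168.1.10", "port": 6379}

REPLICAS = [
    {"host": "192.168.1.11", "port": 6379},
    {"host": "192.168.1.12", "port": 6379},
]

SENTINELS = [
    ("192.168.1.20", 26379),
    ("192.168.1.21", 26379),
    ("192.168.1.22", 26379),
]

MASTER_NAME = "mymaster"  # the master name the Sentinels monitor
```

Keeping the topology in one module means switching between the GNS3 and localhost setups touches a single file.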
Start all Redis instances, then run:
`python test_replication.py`
Output:
- Average replication lag (milliseconds)
- Timestamped logs for each write/read operation
Start Redis Sentinel processes, then execute:
`python test_failover.py`
While the script is running, manually stop the master Redis service.
Output:
- Failure detection timestamp
- New master election time
- Total failover duration
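One way to time the failover (a sketch, not the repository's script) is to poll Sentinel for the current master address and stop the clock when it changes; `discover_master` is redis-py's Sentinel API, while the polling helper below is generic:

```python
import time

def wait_for_new_master(get_master_addr, old_addr, timeout=60.0, interval=0.2):
    """Poll get_master_addr() until it returns an address different from
    old_addr; return (new_addr, seconds_elapsed), or raise on timeout."""
    start = time.perf_counter()
    while time.perf_counter() - start < timeout:
        addr = get_master_addr()
        if addr and addr != old_addr:
            return addr, time.perf_counter() - start
        time.sleep(interval)
    raise TimeoutError(f"no new master elected within {timeout}s")

# Usage with redis-py (Sentinel hosts and master name are placeholders):
#   from redis.sentinel import Sentinel
#   sentinel = Sentinel([("192.168.1.20", 26379)], socket_timeout=0.5)
#   old = sentinel.discover_master("mymaster")
#   # ...now kill the master process...
#   new, elapsed = wait_for_new_master(
#       lambda: sentinel.discover_master("mymaster"), old)
#   print(f"new master {new} elected after {elapsed:.2f}s")
```

The polling interval puts a floor on the measurement resolution, so the reported failover duration is an upper bound accurate to within one interval.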
Ensure the Redis Cluster is properly initialized, then run:
`python test_cluster.py`
Output:
- Distribution of keys across cluster nodes
- Mapping of keys to hash slots
The experimental results provide empirical insights into the CAP Theorem:
- Consistency vs Availability: during network partitions or master failures, Redis Sentinel prioritizes Availability by promoting a new master. This may temporarily sacrifice strong consistency, potentially causing minor data loss if the old master accepted writes before being demoted.
- Partition Tolerance & Sharding: Redis Cluster demonstrates effective horizontal scalability and fault isolation, with the limitation that all keys in a multi-key operation must reside in the same hash slot.
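The write-loss window mentioned above can be narrowed (at the cost of some availability) with Redis's `min-replicas-to-write` safeguard; an illustrative master-side setting:

```
# redis_master.conf (illustrative): refuse writes unless at least one
# replica is connected and its acknowledged lag is at most 10 seconds
min-replicas-to-write 1
min-replicas-max-lag 10
```

Because replication stays asynchronous, this bounds but does not eliminate potential data loss, a direct illustration of the consistency/availability trade-off.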
Detailed logs, metrics, and visualizations are available in the `results/` directory.
Contributions are welcome and encouraged.
- Fork the project
- Create a new feature branch: `git checkout -b feature/NewScenario`
- Commit your changes
- Push to your branch
- Open a Pull Request
Possible extensions include:
- Redlock (Distributed Locking)
- AOF vs RDB persistence benchmarking
- Network partition tolerance experiments
This project is licensed under the MIT License. See the LICENSE file for details.