Data Replicating Appliance — generates realistic, continuous database and filesystem loads for Disaster Recovery (DR) replication testing.
It simulates the kind of production-like data churn that real DR testing requires, running as long-lived background processes (systemd services).
`db_changer.py`: continuously inserts batches of fake records into a MySQL `users` table using the Faker library. It enforces a row cap by deleting the oldest records when the limit is hit, maintaining a rolling window of data.
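A minimal sketch of that rolling-window loop (illustrative only: the column names, the auto-increment `id` ordering, and the connection handling are assumptions, and the real script reads its values from `db_changer.ini`):

```python
import time

MAX_ROWS = 100_000         # db_changer.ini default: max_rows
RECORDS_PER_BATCH = 5_000  # db_changer.ini default: records_per_batch
SLEEP_TIME = 30            # db_changer.ini default: sleep_time (seconds)

def rows_over_cap(current_count: int, max_rows: int = MAX_ROWS) -> int:
    """Number of oldest rows to delete so the table stays at the cap."""
    return max(0, current_count - max_rows)

def run(conn) -> None:
    """Insert Faker batches forever, trimming the oldest rows each pass."""
    # Third-party import kept local so the helper above stays importable
    # without Faker installed.
    from faker import Faker

    fake = Faker()
    while True:
        cur = conn.cursor()
        batch = [(fake.name(), fake.email(), fake.city())
                 for _ in range(RECORDS_PER_BATCH)]
        cur.executemany(
            "INSERT INTO users (name, email, city) VALUES (%s, %s, %s)",
            batch,
        )
        cur.execute("SELECT COUNT(*) FROM users")
        excess = rows_over_cap(cur.fetchone()[0])
        if excess:
            # Assumes an auto-increment id, so ascending id == oldest first.
            cur.execute("DELETE FROM users ORDER BY id ASC LIMIT %s", (excess,))
        conn.commit()
        cur.close()
        time.sleep(SLEEP_TIME)
```

Deleting by ascending `id` keeps the trim cheap: no timestamp index is needed as long as inserts are append-only.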
`file_changer.sh`: continuously writes random binary files (sourced from `/dev/urandom`) to a target directory. When the directory exceeds a configurable size limit, it deletes the oldest files to stay under the threshold.
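The same bounded-churn idea, sketched in Python for clarity (the shipped implementation is `file_changer.sh`, which uses `dd` and `du`; the file naming scheme and the mtime-based oldest-first ordering here are assumptions):

```python
import time
from pathlib import Path

MAX_SIZE = 10 * 1024**3     # file_changer.ini default: MAX_SIZE = 10G
CHUNK_SIZE = 500 * 1024**2  # file_changer.ini default: CHUNK_SIZE = 500M
SLEEP_TIME = 30             # file_changer.ini default: SLEEP_TIME (seconds)

def dir_size(path: Path) -> int:
    """Total bytes of regular files directly under `path` (like `du`)."""
    return sum(f.stat().st_size for f in path.iterdir() if f.is_file())

def trim(path: Path, max_size: int) -> None:
    """Delete files oldest-first until the directory is under the limit."""
    oldest_first = sorted(
        (f for f in path.iterdir() if f.is_file()),
        key=lambda f: f.stat().st_mtime,
    )
    for f in oldest_first:
        if dir_size(path) <= max_size:
            break
        f.unlink()

def churn(path: Path) -> None:
    """Write random chunks forever (like `dd if=/dev/urandom`), then trim."""
    while True:
        target = path / f"chunk_{int(time.time())}.bin"
        with open("/dev/urandom", "rb") as src, open(target, "wb") as dst:
            dst.write(src.read(CHUNK_SIZE))
        trim(path, MAX_SIZE)
        time.sleep(SLEEP_TIME)
```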
| File | Purpose |
|---|---|
| `db_changer.py` | Main DB load generator |
| `db_changer.ini` | Config: DB credentials, batch size, field definitions |
| `db_wipe.py` | Utility to drop all tables and reset the DB |
| `file_changer.sh` | Filesystem load generator |
| `file_changer.ini` | Config: max dir size, chunk size, sleep interval, target path |
| `wiper.sh` | Stops services, deletes logs/data, wipes DB, clears history |
All tunable parameters live in .ini files — no code changes needed to adjust behavior.
`db_changer.ini` defaults:
- `max_rows`: 100,000
- `records_per_batch`: 5,000
- `sleep_time`: 30 seconds
- Fields: `name`, `age`, `email`, `city`, `created_at`, `is_active`, `description`, `random_number`

`file_changer.ini` defaults:
- `MAX_SIZE`: 10G
- `CHUNK_SIZE`: 500M
- `SLEEP_TIME`: 30 seconds
- `DIR`: `/opt/replisim/data`
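Since `file_changer.sh` is Bash, its config is plausibly a simple sourceable `KEY=VALUE` file; a sketch with the defaults above (the exact file format is an assumption, not confirmed by the source):

```ini
; file_changer.ini — sketch only; actual layout may differ
MAX_SIZE=10G
CHUNK_SIZE=500M
SLEEP_TIME=30
DIR=/opt/replisim/data
```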
- **Schema-flexible:** `db_changer.py` reads field definitions from config and dynamically creates or migrates the MySQL table, including adding new columns to existing tables.
- **Bounded operation:** Both tools implement rolling windows — the DB stays under `max_rows` and the directory stays under `MAX_SIZE` — so they run indefinitely without exhausting disk or database space.
- **Faker-backed:** DB records use realistic fake data (names, emails, cities, dates, etc.) to more closely mimic real replication traffic.
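The create-or-migrate step could look roughly like this (a hypothetical helper, not the actual `db_changer.py` code; it assumes the config yields a column-name-to-MySQL-type mapping):

```python
def migrate_table(cursor, table: str, fields: dict) -> None:
    """Create `table` if missing, then add any columns that appear in the
    config but are absent from the live table.

    `fields` maps column name -> MySQL type, e.g. {"email": "VARCHAR(255)"}.
    """
    cols = ", ".join(f"{name} {ctype}" for name, ctype in fields.items())
    cursor.execute(
        f"CREATE TABLE IF NOT EXISTS {table} "
        f"(id INT AUTO_INCREMENT PRIMARY KEY, {cols})"
    )
    # Compare the live schema against the config and backfill new columns.
    cursor.execute(f"SHOW COLUMNS FROM {table}")
    existing = {row[0] for row in cursor.fetchall()}
    for name, ctype in fields.items():
        if name not in existing:
            cursor.execute(f"ALTER TABLE {table} ADD COLUMN {name} {ctype}")
```

Because `CREATE TABLE IF NOT EXISTS` and `ALTER TABLE ... ADD COLUMN` are both additive, rerunning this after a config change is safe: existing rows and columns are left untouched.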
- Python 3 + `mysql.connector` + `faker`
- Bash + `dd` + `du`
- MySQL
- systemd