This is a simple asynchronous job queue and worker system built with Node.js and Redis, including:
- Producer (`producer.js`) that creates jobs
- Redis list queues for active jobs (`LPUSH`/`BRPOP`)
- Multiple workers (`worker.js`) that process jobs concurrently
- Retries and dead-letter queue handling
- Delayed jobs via a Redis sorted set + scheduler (`scheduler.js`)
- Priority queues (high priority before normal)
The focus is educational: clear code and realistic infrastructure behavior without heavy frameworks.
The producer builds a structured JSON job with the fields `id`, `type`, `payload`, `retryCount`, `timestamp`, and `priority`.
Then it either:
- pushes the job directly onto an active queue with `LPUSH`, or
- schedules it in the delayed set (`ZADD`) with a future timestamp as the score
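That push-or-schedule decision might look roughly like this, assuming a node-redis v4 style client (camelCase commands); `enqueue` is an illustrative name, not necessarily the repo's:

```javascript
// Sketch of the producer's push-or-schedule decision, assuming a
// node-redis v4 style client. enqueue is an illustrative name.
async function enqueue(client, job, { delayMs = 0 } = {}) {
  const serialized = JSON.stringify(job);
  if (delayMs > 0) {
    // Delayed job: score the sorted-set entry with the time it becomes due.
    await client.zAdd("jobs:delayed", [
      { score: Date.now() + delayMs, value: serialized },
    ]);
  } else {
    // Immediate job: LPUSH onto the matching priority list queue.
    const queue = job.priority === "high" ? "jobs:high" : "jobs:normal";
    await client.lPush(queue, serialized);
  }
}
```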
Redis keys used:
- `jobs:high` - high-priority list queue
- `jobs:normal` - normal-priority list queue
- `jobs:dead-letter` - dead-letter queue
- `jobs:delayed` - delayed jobs sorted set
Workers consume with `BRPOP` in this order:

1. `jobs:high`
2. `jobs:normal`
This gives high-priority jobs preference.
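A sketch of that priority pop, assuming a node-redis v4 style client: `BRPOP` checks the given keys in order, so `jobs:high` is always tried first. `popNextJob` is an illustrative name.

```javascript
// Pop the next job, preferring the high-priority queue (node-redis v4 style).
async function popNextJob(client, timeoutSec = 5) {
  const res = await client.brPop(["jobs:high", "jobs:normal"], timeoutSec);
  if (!res) return null; // timed out with no job available
  return { queue: res.key, job: JSON.parse(res.element) };
}
```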
Each worker process:
- waits for jobs via blocking pop (`BRPOP`)
- processes jobs asynchronously
- supports concurrency using an in-flight Promise set
- logs lifecycle events
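The in-flight Promise set pattern can be sketched roughly like this (illustrative, not the repo's exact code): admit work while the set is below the limit, then await a free slot.

```javascript
// Run async tasks with a concurrency cap using an in-flight Promise set.
async function runWithConcurrency(tasks, limit) {
  const inFlight = new Set();
  const results = [];
  for (const task of tasks) {
    const p = task().then((result) => {
      results.push(result);
      inFlight.delete(p); // free this slot when the task settles
    });
    inFlight.add(p);
    if (inFlight.size >= limit) {
      await Promise.race(inFlight); // wait until at least one slot frees up
    }
  }
  await Promise.all(inFlight); // drain remaining work
  return results;
}
```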
When a job fails:
- increment `retryCount`
- if the retry limit is not reached: requeue the job
- if retry limit exceeded: push to dead-letter queue
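That retry decision can be sketched as follows (a sketch assuming a node-redis v4 style client; `handleFailure` is an illustrative name, and `maxRetries` mirrors the `MAX_RETRIES` setting):

```javascript
// Decide whether a failed job is requeued or sent to the dead-letter queue.
async function handleFailure(client, job, maxRetries = 3) {
  job.retryCount += 1;
  const serialized = JSON.stringify(job);
  if (job.retryCount <= maxRetries) {
    // Retry: requeue on the job's priority queue.
    const queue = job.priority === "high" ? "jobs:high" : "jobs:normal";
    await client.lPush(queue, serialized);
    return "requeued";
  }
  // Give up: park the job on the dead-letter queue for inspection.
  await client.lPush("jobs:dead-letter", serialized);
  return "dead-letter";
}
```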
The scheduler runs every few seconds and:
- reads due jobs from `jobs:delayed` (score <= now)
- moves each ready job into the high/normal active queue
- removes moved job from delayed set
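One scheduler tick can be sketched like this, assuming a node-redis v4 style client (`promoteDueJobs` is an illustrative name; the batch size mirrors `SCHEDULER_BATCH_SIZE`):

```javascript
// Promote due delayed jobs into the active queues (node-redis v4 style).
async function promoteDueJobs(client, batchSize = 100) {
  const now = Date.now();
  // Jobs whose score (due time) has passed are ready to run.
  const due = await client.zRangeByScore("jobs:delayed", 0, now, {
    LIMIT: { offset: 0, count: batchSize },
  });
  for (const serialized of due) {
    const job = JSON.parse(serialized);
    const queue = job.priority === "high" ? "jobs:high" : "jobs:normal";
    await client.lPush(queue, serialized);       // move into the active queue
    await client.zRem("jobs:delayed", serialized); // drop from the delayed set
  }
  return due.length;
}
```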
Requirements:

- Node.js 18+
- Redis server (local install) for local mode
- Docker + Docker Compose for containerized mode
Install dependencies:

```bash
npm install
```

If Redis is installed locally, start it with your preferred command. Example:

```bash
redis-server
```

Terminal 1:

```bash
npm run scheduler
```

Terminal 2 (and more terminals for additional workers):

```bash
npm run worker
```

Optional worker tuning:
```bash
WORKER_ID=worker-a WORKER_CONCURRENCY=5 MAX_RETRIES=3 npm run worker
```

Normal job:

```bash
npm run producer -- send-email '{"to":"team@example.com","workMs":700}'
```

High-priority job:

```bash
npm run producer -- generate-report '{"reportId":42,"workMs":500}' --priority high
```

Delayed job (runs after 5 seconds):

```bash
npm run producer -- sync-data '{"source":"crm","workMs":900}' --delay 5000
```

Flaky job (random failures -> retries):

```bash
npm run producer -- resize-image '{"file":"photo.png","workMs":600}' --fail-rate 0.7
```

Always-fail job (moves to dead-letter after max retries):

```bash
npm run producer -- billing-charge '{"invoiceId":"inv-123","shouldFail":true}'
```

Build the images:

```bash
docker compose build
```

Start Redis, the scheduler, and a worker:

```bash
docker compose up -d redis scheduler worker
```

Scale workers (example: 3 workers):
```bash
docker compose up -d --scale worker=3 worker
```

Normal job:

```bash
docker compose run --rm producer send-email '{"to":"team@example.com","workMs":700}'
```

High-priority job:

```bash
docker compose run --rm producer generate-report '{"reportId":42,"workMs":500}' --priority high
```

Delayed job:

```bash
docker compose run --rm producer sync-data '{"source":"crm","workMs":900}' --delay 5000
```

Failing job:

```bash
docker compose run --rm producer billing-charge '{"invoiceId":"inv-123","shouldFail":true}'
```

Follow logs:

```bash
docker compose logs -f scheduler worker
```

Stop everything:

```bash
docker compose down
```

Remove the Redis volume too:
```bash
docker compose down -v
```

Environment variables:

- `REDIS_URL` (default: `redis://127.0.0.1:6379`)
- `WORKER_ID` (optional worker label)
- `WORKER_CONCURRENCY` (default: `3`)
- `MAX_RETRIES` (default: `3`)
- `SCHEDULER_INTERVAL_MS` (default: `2000`)
- `SCHEDULER_BATCH_SIZE` (default: `100`)

Job flow:
- Producer creates a job and pushes to Redis.
- Worker pops from high-priority queue first, then normal queue.
- Worker logs start/completion or failure.
- Failed jobs are retried with an incremented `retryCount`.
- Jobs that exceed max retries move to the dead-letter queue.
- Scheduler promotes delayed jobs into active queues when due.
Files:

- `redisClient.js` - Redis connection setup
- `job.js` - job model and serialization helpers
- `queue.js` - queue operations, delayed jobs, retry/dead-letter logic
- `producer.js` - CLI producer
- `worker.js` - concurrent worker process
- `scheduler.js` - delayed job scheduler
- `Dockerfile` - app container image for worker/scheduler/producer
- `docker-compose.yml` - local multi-container orchestration