A high-performance concurrent task queue built in Go, featuring worker pools, retry logic, and graceful shutdown.
This project demonstrates how to design and implement a scalable job processing system using Go’s concurrency primitives.
It simulates a distributed workload where multiple workers process jobs with retry logic and failure handling.
- 🧵 Worker pool with configurable concurrency
- 🔁 Retry mechanism with max retry limit
- ❌ Permanent failure handling
- 📦 Channel-based job queue
- 🧠 Deterministic behavior under concurrency
- 🛑 Graceful shutdown (no job loss, no goroutine leaks)
- 📊 Performance benchmarking (jobs/sec)
- 🔌 Decoupled processing logic via dependency injection
```
                 +-------------------+
                 |     Job Queue     |
                 |     (channel)     |
                 +---------+---------+
                           |
          +--------+-------+-------+--------+
          |        |       |       |        |
       Worker   Worker  Worker  Worker     ...
          |        |       |       |
          +--------+---+---+-------+
                       |
              Result Aggregation
             (success / failure)
```
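The flow in the diagram can be sketched with channels and a `sync.WaitGroup`. A minimal sketch, assuming a trivial "double the job ID" workload; function names here are illustrative, not the project's actual API:

```go
package main

import (
	"fmt"
	"sync"
)

// process stands in for real work: it doubles the job ID.
func process(id int) int { return id * 2 }

// runPool fans jobs out to n workers over a channel,
// aggregates results, and returns their sum.
func runPool(n, jobCount int) int {
	jobs := make(chan int)
	results := make(chan int)

	var wg sync.WaitGroup
	for w := 0; w < n; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range jobs { // each worker pulls from the shared queue
				results <- process(j)
			}
		}()
	}

	// Producer: feed the queue, then close it to signal "no more jobs".
	go func() {
		for j := 1; j <= jobCount; j++ {
			jobs <- j
		}
		close(jobs)
	}()

	// Close results once every worker has exited.
	go func() {
		wg.Wait()
		close(results)
	}()

	sum := 0
	for r := range results { // result aggregation
		sum += r
	}
	return sum
}

func main() {
	fmt.Println(runPool(4, 100)) // 2*(1+...+100) = 10100
}
```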
Each job:
- is retried up to 3 times
- logs retry attempts
- is marked as FAILED permanently after max retries
The system ensures:
- all jobs are processed before exit
- no goroutines are leaked
- clean termination of workers
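One common way to get those guarantees is to close the job channel and wait on a `sync.WaitGroup`: workers drain in-flight jobs, then exit. A minimal sketch (names are illustrative):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// drainAndStop enqueues jobCount jobs, signals shutdown by closing
// the channel, waits for every worker to return, and reports how
// many jobs were processed (all of them, if shutdown is clean).
func drainAndStop(workers, jobCount int) int64 {
	jobs := make(chan int, jobCount)
	var processed int64
	var wg sync.WaitGroup

	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for range jobs { // loop exits once jobs is closed and drained
				atomic.AddInt64(&processed, 1)
			}
		}()
	}

	for j := 0; j < jobCount; j++ {
		jobs <- j
	}
	close(jobs) // no more jobs: workers finish what's queued, then return
	wg.Wait()   // every worker has returned -> no goroutine leaks
	return atomic.LoadInt64(&processed)
}

func main() {
	fmt.Println(drainAndStop(4, 1000)) // all 1000 jobs processed before exit
}
```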
Run:

```
go test -bench=. -benchmem ./internal/workerExample
```

Example result:

```
BenchmarkWorkerPool-8   148   8269285 ns/op   39598 B/op   765 allocs/op
```
- ~120,000 jobs/sec (8 workers, lightweight processing)
- Throughput increases with concurrency
- Scaling is not linear due to:
- scheduling overhead
- shared resource contention
- Results are consistent across runs, reflecting the system's deterministic behavior
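The jobs/sec figure can be derived from the benchmark's ns/op once the number of jobs per benchmark iteration is known. In this sketch, 1000 jobs per iteration is an assumed example, not a value taken from the benchmark output:

```go
package main

import "fmt"

// jobsPerSec converts a Go benchmark's ns/op into throughput,
// given how many jobs a single benchmark iteration processes.
func jobsPerSec(nsPerOp float64, jobsPerOp int) float64 {
	return float64(jobsPerOp) / (nsPerOp / 1e9)
}

func main() {
	// 8269285 ns/op with an assumed 1000 jobs per iteration
	// works out to roughly 120,000 jobs/sec.
	fmt.Printf("%.0f jobs/sec\n", jobsPerSec(8269285, 1000))
}
```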
Run the application:

```
go run cmd/app/main.go -workers=4 -jobs=100
```

- `-workers` → number of concurrent workers
- `-jobs` → total number of jobs
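Parsing those two flags can be sketched with the standard `flag` package; `parseFlags` and its defaults are illustrative, not the project's actual code:

```go
package main

import (
	"flag"
	"fmt"
)

// parseFlags mirrors the CLI above: -workers and -jobs.
func parseFlags(args []string) (workers, jobs int, err error) {
	fs := flag.NewFlagSet("app", flag.ContinueOnError)
	w := fs.Int("workers", 4, "number of concurrent workers")
	j := fs.Int("jobs", 100, "total number of jobs")
	if err := fs.Parse(args); err != nil {
		return 0, 0, err
	}
	return *w, *j, nil
}

func main() {
	w, j, _ := parseFlags([]string{"-workers=4", "-jobs=100"})
	fmt.Println(w, j) // 4 100
}
```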
- Go concurrency (goroutines + channels)
- Worker pool pattern
- Dependency injection (decoupled processing logic)
- Fault tolerance via retries
- Synchronization and coordination
- Performance measurement
- Clean system shutdown
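The dependency-injection idea above can be sketched as a pool that depends only on a small interface, so processing logic can be swapped in tests or at startup. The `Processor` interface and `Pool` type here are illustrative, not the project's actual API:

```go
package main

import "fmt"

// Processor decouples job-processing logic from the pool.
type Processor interface {
	Process(jobID int) error
}

// Pool depends only on the interface, not on a concrete implementation.
type Pool struct {
	proc Processor
}

func NewPool(p Processor) *Pool { return &Pool{proc: p} }

// Run processes each job and tallies successes and failures.
func (p *Pool) Run(jobIDs []int) (ok, failed int) {
	for _, id := range jobIDs {
		if err := p.proc.Process(id); err != nil {
			failed++
		} else {
			ok++
		}
	}
	return ok, failed
}

// evenFails is a toy Processor for demonstration: even IDs fail.
type evenFails struct{}

func (evenFails) Process(id int) error {
	if id%2 == 0 {
		return fmt.Errorf("job %d failed", id)
	}
	return nil
}

func main() {
	pool := NewPool(evenFails{})
	ok, failed := pool.Run([]int{1, 2, 3, 4, 5})
	fmt.Println(ok, failed) // 3 2
}
```

Because `Pool` only sees the interface, a test can inject a stub `Processor` without touching any real processing code.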
- Exponential backoff for retries
- Context-based cancellation
- Metrics export (Prometheus)
- Persistent queue (Redis / Kafka)
- Rate limiting
Michel Bevilacqua