🚦 Queue Server 🚦
```
╔═══════════════════════════════════════╗
║ Request 1 → [🔒 Processing...]        ║
║ Request 2 → [⏳ Waiting...]           ║
║ Request 3 → [⏳ Waiting...]           ║
║ Request 4 → [⏳ Waiting...]           ║
╚═══════════════════════════════════════╝
```
ID-based locking in action!
A demonstration project with two servers implementing ID-based resource locking:
- Express Server: Locks resources per ID, returning 423 (Locked) for concurrent requests
- FastAPI Server: Proxies requests to Express with automatic retry logic for seamless queueing
- Windows 10/11 (for `setup.bat`)
- Python 3.8+ (for FastAPI server)
- Node.js (will be installed by setup script if missing)
- oha (HTTP load testing tool, will be installed by setup script if missing)
Run the setup script which will install all dependencies:
```
.\setup.bat
```

This will:
- Check for Node.js and install it via winget if missing
- Check for oha and install it via winget if missing
- Install npm packages (Express, Morgan)
- Display next steps
Note: If the script installs Node.js or oha, you'll need to restart your terminal and run `setup.bat` again so the new tools are on your PATH.
If you prefer to install manually:
- Install Node.js dependencies:

  ```
  npm install
  ```

- Install Python dependencies:

  ```
  npm run fastapi:install
  ```

  Or manually:

  ```
  cd api
  pip install -r requirements.txt
  ```
The Express server implements ID-based locking with normally distributed processing times.
```
npm run express:start
```

Server runs on http://localhost:3000
Endpoint: GET /lock?id={id}
- Returns 200 with success message after processing (~1 second with variance)
- Returns 423 (Locked) if the same ID is already being processed
- Returns 400 if the `id` parameter is missing
The FastAPI server proxies requests to Express with automatic retry logic.
```
npm run fastapi:start
```

Server runs on http://localhost:8000
Endpoint: GET /request/{id}
- Automatically retries when Express returns 423 (Locked)
- Returns 200 once the lock is acquired and processing completes
- Returns 504 if the maximum number of retries is exceeded
- Returns 503 if Express server is unavailable
The project includes several load testing scripts using oha:
```
# Default load test (1000 requests, 50 concurrent, IDs 1-5)
npm run express:load

# Default test
npm run fastapi:load

# Light load (100 requests, 10 concurrent)
npm run fastapi:load:light

# Heavy load (1000 requests, 100 concurrent, IDs 1-5 - high contention)
npm run fastapi:load:heavy

# Single ID test (maximum lock contention)
npm run fastapi:load:single-id

# Many IDs (IDs 10-99, less contention)
npm run fastapi:load:many-ids

# Extreme test (10000 requests, 200 concurrent)
npm run fastapi:load:extreme
```

All load test scripts support additional oha parameters:
```
# Custom number of requests
npm run fastapi:load -- -n 5000

# Custom requests and connections
npm run fastapi:load -- -n 2000 -c 100

# Different duration
npm run fastapi:load -- -z 30s
```

- Uses an in-memory `Map` to track locks per ID
- Processing time follows a normal distribution (mean: 1000ms, σ: ~167ms)
- Logs all requests with HTTP status codes via Morgan middleware
- Returns 423 immediately if resource is locked
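The per-ID locking described above can be sketched as follows. This is a minimal, self-contained illustration of the idea (not the project's actual `server.js`): an in-memory `Map` records which IDs are busy, a concurrent request for a busy ID is rejected immediately with 423, and the processing delay is drawn from a normal distribution via the Box-Muller transform. The shortened delay parameters in the demo call are illustrative only.

```javascript
// In-memory lock table: an ID is "locked" while it has an entry here.
const locks = new Map();

// Sample a processing time from a normal distribution using Box-Muller.
// The real server uses mean 1000ms and sigma ~167ms; clamp at 0 to be safe.
function sampleProcessingTime(mean = 1000, sigma = 167) {
  const u1 = Math.random();
  const u2 = Math.random();
  const z = Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
  return Math.max(0, mean + sigma * z);
}

// Returns a status code with the same semantics as GET /lock?id={id}:
// 400 for a missing id, 423 if the id is already locked, 200 after processing.
async function handleLock(id) {
  if (id === undefined || id === null) return 400; // missing id parameter
  if (locks.has(id)) return 423;                   // already being processed
  locks.set(id, true);                             // acquire the lock
  try {
    // Shortened mean/sigma so the demo finishes quickly.
    await new Promise((r) => setTimeout(r, sampleProcessingTime(50, 8)));
    return 200;
  } finally {
    locks.delete(id);                              // always release the lock
  }
}
```

Two concurrent calls with the same ID show the contention: the first call acquires the lock and eventually resolves to 200, while the second sees the lock and resolves to 423 immediately.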
- Makes HTTP requests to Express server via httpx
- Implements retry logic with configurable parameters:
  - `MAX_RETRIES`: 100 attempts
  - `RETRY_DELAY_MS`: 100ms between retries
- Timeout: 30 seconds
- Queues requests transparently by retrying on 423 responses
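The retry loop at the heart of the proxy can be sketched like this. It is an assumed shape, not the actual `api/main.py` (which is Python/httpx): call the upstream, pass any non-423 status straight through, and on 423 wait `RETRY_DELAY_MS` and try again, returning 504 once `MAX_RETRIES` attempts are exhausted.

```javascript
const MAX_RETRIES = 100;     // maximum retry attempts
const RETRY_DELAY_MS = 100;  // delay between retries, in milliseconds

// callUpstream is any async function returning an HTTP status code,
// e.g. a GET /lock?id=... request to the Express server.
async function proxyWithRetry(callUpstream, maxRetries = MAX_RETRIES, delayMs = RETRY_DELAY_MS) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const status = await callUpstream();
    if (status !== 423) return status;  // 200, 400, etc. pass straight through
    await new Promise((r) => setTimeout(r, delayMs)); // locked: wait, then retry
  }
  return 504; // retries exhausted -> Gateway Timeout
}
```

With the default parameters, a request spends at most `MAX_RETRIES × RETRY_DELAY_MS` = 10 seconds in retry delays (plus the upstream request time itself) before the proxy gives up with 504 — which is why the queueing looks seamless to callers under moderate contention.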
```
queue/
├── server.js           # Express server with locking
├── package.json        # npm scripts and dependencies
├── setup.bat           # Automated Windows setup script
├── api/
│   ├── main.py         # FastAPI proxy server
│   ├── requirements.txt
│   └── README.md
└── README.md
```
```
# Terminal 1: Start Express
npm run express:start

# Terminal 2: Run load test
npm run express:load
```

Expected: Many 423 responses due to lock contention.
```
┌─────────┐
│Request 1│ → 🔒 Lock acquired → ✅ 200 OK (after ~1s)
└─────────┘
┌─────────┐
│Request 2│ → ⛔ Lock held → ❌ 423 Locked (immediate)
└─────────┘
┌─────────┐
│Request 3│ → ⛔ Lock held → ❌ 423 Locked (immediate)
└─────────┘
```
```
# Terminal 1: Start Express
npm run express:start

# Terminal 2: Start FastAPI
npm run fastapi:start

# Terminal 3: Run load test
npm run fastapi:load
```

Expected: 100% success rate with FastAPI handling retries automatically.
```
┌─────────┐
│Request 1│ → FastAPI → Express 🔒 → ✅ 200 OK
└─────────┘                 ↓
                         (1 sec)
┌─────────┐                 ↓
│Request 2│ → FastAPI → Express ⛔ 423
└─────────┘       ↓               ↑
               retry   retry   retry
                 ↓       ↓       ↓
              wait... wait...   🔒 → ✅ 200 OK
```
Everyone gets served eventually! 🎉
Edit `PROCESSING_TIME_MS` in `server.js`:

```
const PROCESSING_TIME_MS = 1000; // milliseconds
```

Edit retry parameters in `api/main.py`:

```
MAX_RETRIES = 100     # Maximum retry attempts
RETRY_DELAY_MS = 100  # Milliseconds between retries
```

MIT