MilanJa/queue

Queue Server

    🚦 Queue Server 🚦
    ╔═══════════════════════════════════════╗
    ║  Request 1  →  [🔒 Processing...]     ║
    ║  Request 2  →  [⏳ Waiting...]        ║
    ║  Request 3  →  [⏳ Waiting...]        ║
    ║  Request 4  →  [⏳ Waiting...]        ║
    ╚═══════════════════════════════════════╝
         ID-based locking in action!

A demonstration project with two servers implementing ID-based resource locking:

  • Express Server: Locks resources per ID, returning 423 (Locked) when the same ID is requested concurrently
  • FastAPI Server: Proxies requests to Express with automatic retry logic for seamless queueing

Requirements

  • Windows 10/11 (for setup.bat)
  • Python 3.8+ (for FastAPI server)
  • Node.js (will be installed by setup script if missing)
  • oha (HTTP load testing tool, will be installed by setup script if missing)

Quick Start

Automated Setup (Recommended)

Run the setup script, which installs all dependencies:

.\setup.bat

This will:

  1. Check for Node.js and install it via winget if missing
  2. Check for oha and install it via winget if missing
  3. Install npm packages (Express, Morgan)
  4. Display next steps

Note: If the script installs Node.js or oha for you, restart your terminal and run setup.bat again so the newly installed tools are on your PATH.

Manual Setup

If you prefer to install manually:

  1. Install Node.js dependencies:

    npm install
  2. Install Python dependencies:

    npm run fastapi:install

    Or manually:

    cd api
    pip install -r requirements.txt

Running the Servers

Express Server

The Express server implements ID-based locking with normally distributed processing times.

npm run express:start

Server runs on http://localhost:3000

Endpoint: GET /lock?id={id}

  • Returns 200 with success message after processing (~1 second with variance)
  • Returns 423 (Locked) if the same ID is already being processed
  • Returns 400 if id parameter is missing
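The endpoint's status-code contract can be sketched as follows. This is an illustrative Python rendering, not the actual server.js code; the `locks` set and `handle_lock` function are hypothetical names, and the ~1 second of processing (after which the real server releases the lock) is elided:

```python
# Sketch of the Express /lock decision logic (illustrative; the real
# implementation is JavaScript in server.js).
locks = set()  # IDs currently being processed

def handle_lock(id_param):
    """Return the HTTP status the endpoint would respond with."""
    if id_param is None:
        return 400          # missing ?id= parameter
    if id_param in locks:
        return 423          # this ID is already being processed
    locks.add(id_param)     # acquire the lock; the real server releases
    return 200              # it after ~1 second of processing

print(handle_lock(None))    # 400 - missing id
print(handle_lock("42"))    # 200 - lock acquired
print(handle_lock("42"))    # 423 - still locked
```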

FastAPI Server

The FastAPI server proxies requests to Express with automatic retry logic.

npm run fastapi:start

Server runs on http://localhost:8000

Endpoint: GET /request/{id}

  • Automatically retries when Express returns 423 (Locked)
  • Returns 200 once the lock is acquired and processing completes
  • Returns 504 if max retries exceeded
  • Returns 503 if Express server is unavailable

Load Testing

The project includes several load testing scripts using oha:

Express Server Tests

# Default load test (1000 requests, 50 concurrent, IDs 1-5)
npm run express:load

FastAPI Server Tests

# Default test
npm run fastapi:load

# Light load (100 requests, 10 concurrent)
npm run fastapi:load:light

# Heavy load (1000 requests, 100 concurrent, IDs 1-5 - high contention)
npm run fastapi:load:heavy

# Single ID test (maximum lock contention)
npm run fastapi:load:single-id

# Many IDs (IDs 10-99, less contention)
npm run fastapi:load:many-ids

# Extreme test (10000 requests, 200 concurrent)
npm run fastapi:load:extreme

Custom Load Tests

All load test scripts support additional oha parameters:

# Custom number of requests
npm run fastapi:load -- -n 5000

# Custom requests and connections
npm run fastapi:load -- -n 2000 -c 100

# Different duration
npm run fastapi:load -- -z 30s

Architecture

Express Server (server.js)

  • Uses in-memory Map to track locks per ID
  • Processing time follows normal distribution (mean: 1000ms, σ: ~167ms)
  • Logs all requests with HTTP status codes via Morgan middleware
  • Returns 423 immediately if resource is locked
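The normally distributed delay described above can be sampled like this. A minimal sketch in Python for illustration (the actual server does this in JavaScript, and the `processing_time_ms` helper name is an assumption):

```python
import random

MEAN_MS = 1000   # mean processing time from the README
SIGMA_MS = 167   # standard deviation (~ mean / 6)

def processing_time_ms():
    """Sample a processing delay, clamped so it is never negative."""
    return max(0.0, random.gauss(MEAN_MS, SIGMA_MS))
```

Clamping matters in practice: with σ ≈ mean/6 a negative sample is astronomically unlikely, but a timer must never be given a negative delay.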

FastAPI Server (api/main.py)

  • Makes HTTP requests to Express server via httpx
  • Implements retry logic with configurable parameters:
    • MAX_RETRIES: 100 attempts
    • RETRY_DELAY_MS: 100ms between retries
    • Timeout: 30 seconds
  • Queues requests transparently by retrying on 423 responses
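The retry loop at the heart of this behavior can be sketched as below. This is a simplified Python sketch, not the actual api/main.py: the upstream Express call is stubbed out as a caller-supplied `send_request` function, and `proxy_with_retry` is a hypothetical name:

```python
import time

MAX_RETRIES = 100        # maximum retry attempts
RETRY_DELAY_MS = 100     # delay between retries

def proxy_with_retry(send_request, sleep=time.sleep):
    """Retry while the upstream reports 423 (Locked), mapping outcomes
    to the statuses the FastAPI proxy returns to its own client."""
    for _ in range(MAX_RETRIES):
        try:
            status = send_request()
        except ConnectionError:
            return 503              # Express server unavailable
        if status != 423:
            return status           # 200 once the lock was acquired
        sleep(RETRY_DELAY_MS / 1000)
    return 504                      # lock never freed within MAX_RETRIES

# Example: upstream is locked twice, then succeeds.
responses = iter([423, 423, 200])
print(proxy_with_retry(lambda: next(responses), sleep=lambda s: None))  # 200
```

Injecting `sleep` keeps the sketch testable; the real server awaits an async delay between attempts instead of blocking.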

Project Structure

queue/
├── server.js           # Express server with locking
├── package.json        # npm scripts and dependencies
├── setup.bat           # Automated Windows setup script
├── api/
│   ├── main.py         # FastAPI proxy server
│   ├── requirements.txt
│   └── README.md
└── README.md

Testing Examples

Test Express Locking

# Terminal 1: Start Express
npm run express:start

# Terminal 2: Run load test
npm run express:load

Expected: Many 423 responses due to lock contention.

   ┌─────────┐
   │Request 1│ → 🔒 Lock acquired → ✅ 200 OK (after ~1s)
   └─────────┘

   ┌─────────┐
   │Request 2│ → ⛔ Lock held → ❌ 423 Locked (immediate)
   └─────────┘

   ┌─────────┐
   │Request 3│ → ⛔ Lock held → ❌ 423 Locked (immediate)
   └─────────┘

Test FastAPI Retry Logic

# Terminal 1: Start Express
npm run express:start

# Terminal 2: Start FastAPI
npm run fastapi:start

# Terminal 3: Run load test
npm run fastapi:load

Expected: 100% success rate with FastAPI handling retries automatically.

   ┌─────────┐
   │Request 1│ → FastAPI → Express 🔒 → ✅ 200 OK
   └─────────┘              ↓
                          (1 sec)

   ┌─────────┐              ↓
   │Request 2│ → FastAPI → Express ⛔ 423
   └─────────┘       ↓       ↑
                   retry   retry  retry
                     ↓       ↓      ↓
                   wait... wait... 🔒 → ✅ 200 OK

   Everyone gets served eventually! 🎉

Configuration

Express Processing Time

Edit PROCESSING_TIME_MS in server.js:

const PROCESSING_TIME_MS = 1000; // milliseconds

FastAPI Retry Behavior

Edit retry parameters in api/main.py:

MAX_RETRIES = 100        # Maximum retry attempts
RETRY_DELAY_MS = 100     # Milliseconds between retries

License

MIT
