Akemid/api-limits-verifier

API Limits Verifier


Python 3.13+ · License: MIT · uv · Code style: black · Async: aiohttp · Dashboard: Streamlit


Python script to verify HTTP endpoint throttling and rate limiting. Test that your API correctly enforces rate limits (e.g., 1000 requests/hour) with configurable load patterns.

🚀 New here? Check out the Quick Start Guide for a 2-minute hands-on tutorial.

🎯 Objective

Verify that an HTTP endpoint correctly implements rate limiting (e.g., 1000 requests/hour). The endpoint to test is configured in the .env file via the TARGET_URL variable.

🚀 Installation

Option 1: Using uv (Recommended)

uv is an extremely fast Python package manager. If you don't have it installed:

# Install uv (macOS/Linux)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Or with brew
brew install uv

Then install the project:

# uv automatically creates the venv and syncs dependencies from pyproject.toml
uv sync

# Activate the virtual environment
source .venv/bin/activate  # Windows: .venv\Scripts\activate

Option 2: Using traditional pip

# Create virtual environment
python3 -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

⚙️ Configuration

The script reads its default values from a .env file; edit that file to customize them:

# Load Testing Configuration
TARGET_URL=https://api.example.com/endpoint
TOTAL_REQUESTS=50
MODE=burst              # Optional: burst, distributed, rafagas
BURST_SIZE=100
BURST_DELAY=5.0
DISTRIBUTED_DELAY=      # Optional: custom delay in seconds
OUTPUT_FILE=load_test_report.json
VERIFY_SSL=false

Note: Command-line arguments always take precedence over .env values.
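The precedence rule (CLI argument, then .env value, then built-in fallback) can be sketched as a small resolver. This is a hypothetical helper for illustration, not the script's actual code:

```python
import os

def resolve(cli_value, env_key, fallback=None):
    """CLI argument wins; otherwise the .env-loaded environment variable; otherwise the fallback."""
    if cli_value is not None:
        return cli_value
    return os.environ.get(env_key, fallback)

# python-dotenv's load_dotenv() would have placed .env values into os.environ:
os.environ["TOTAL_REQUESTS"] = "50"
print(resolve(None, "TOTAL_REQUESTS", "100"))    # -> 50   (.env value used)
print(resolve("2000", "TOTAL_REQUESTS", "100"))  # -> 2000 (CLI wins)
```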

📋 Usage

The script supports three operation modes:

1. BURST Mode (Full Burst)

Sends all requests as fast as possible. Ideal for verifying rate limiting.

python load_test.py --mode burst

Expected result: the first 1000 requests return HTTP 200, and request #1001 is rejected with HTTP 429 (Too Many Requests) or 503 (Service Unavailable).
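Conceptually, BURST mode fires every request concurrently and keeps the responses in submission order. A minimal sketch of that structure (assumed, not the script's actual implementation), demoed against a fake endpoint that throttles after 1000 requests:

```python
import asyncio

async def burst(send, total):
    """Fire all requests at once; asyncio.gather preserves result order."""
    return await asyncio.gather(*(send(i) for i in range(1, total + 1)))

async def fake_send(i):
    # Stand-in for an HTTP GET: 200 until a hypothetical 1000/hour limit, then 429.
    return 200 if i <= 1000 else 429

statuses = asyncio.run(burst(fake_send, 1001))
print(statuses.count(200), statuses[-1])  # -> 1000 429
```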

2. DISTRIBUTED Mode

Distributes requests evenly over 1 hour. Simulates real usage.

python load_test.py --mode distributed

Duration: ~1 hour (with 1000 requests, that is 3.6 seconds between each request)
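The spacing between requests is simply the window divided by the request count, unless a custom delay is given. A sketch of that arithmetic (illustrative; parameter names mirror the .env keys):

```python
def distributed_delay(total_requests, window_seconds=3600, custom=None):
    """Seconds between requests: an explicit DISTRIBUTED_DELAY wins, else spread over the window."""
    return custom if custom is not None else window_seconds / total_requests

print(distributed_delay(1000))            # -> 3.6  (the ~1 hour run described above)
print(distributed_delay(50))              # -> 72.0 (with the default TOTAL_REQUESTS)
print(distributed_delay(50, custom=2.0))  # -> 2.0  (DISTRIBUTED_DELAY set)
```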

3. RAFAGAS Mode (Bursts)

Sends requests in groups (bursts) with pauses between them. Useful for simulating intermittent traffic.

# Send in groups of 100 requests with 5 second pauses
python load_test.py --mode rafagas --burst-size 100 --burst-delay 5

# Send in groups of 50 requests with 10 second pauses
python load_test.py --mode rafagas --burst-size 50 --burst-delay 10
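The grouping logic amounts to chunking the request numbers into consecutive bursts (illustrative sketch, not the script's code):

```python
def plan_bursts(total, burst_size):
    """Split request numbers 1..total into consecutive groups of at most burst_size."""
    numbers = list(range(1, total + 1))
    return [numbers[i:i + burst_size] for i in range(0, total, burst_size)]

# 250 requests in bursts of 100 -> two full bursts plus a partial one;
# the real script would sleep burst_delay seconds between groups.
print([len(b) for b in plan_bursts(250, 100)])  # -> [100, 100, 50]
```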

⚙️ Advanced Options

python load_test.py \
  --url https://api.example.com/endpoint \  # Custom URL
  --requests 2000 \                          # Number of requests
  --mode burst \                             # Send mode
  --burst-size 50 \                          # Burst size (rafagas mode)
  --burst-delay 3 \                          # Delay between bursts (rafagas mode)
  --output custom_report.json                # Output file

Parameters

| Parameter | Description | Default (.env or fallback) |
|---|---|---|
| `--url` | Endpoint URL to test | `TARGET_URL` from .env |
| `--requests` | Total number of requests | `TOTAL_REQUESTS` from .env, or 50 |
| `--mode` | Send mode: `burst`, `distributed`, `rafagas` | `MODE` from .env; otherwise required |
| `--burst-size` | Requests per burst (rafagas mode only) | `BURST_SIZE` from .env, or 100 |
| `--burst-delay` | Seconds between bursts (rafagas mode only) | `BURST_DELAY` from .env, or 5.0 |
| `--distributed-delay` | Custom delay between requests (distributed mode) | `DISTRIBUTED_DELAY` from .env, or auto (spread over 1 hour) |
| `--output` | JSON file with detailed report | `OUTPUT_FILE` from .env, or `load_test_report.json` |
| `--verify-ssl` | Enable SSL verification | `VERIFY_SSL` from .env, or `false` |

📊 Output

Terminal Output

🎯 Target: https://api.example.com/endpoint
📝 Mode: BURST
------------------------------------------------------------
🚀 Running in BURST mode: sending 1001 requests as fast as possible

  Completed 100/1001 requests...
  Completed 200/1001 requests...
  ...

============================================================
LOAD TEST RESULTS
============================================================
Total requests:      1001
Successful (200):    1000
Throttled (429/503): 1
Failed (other):      0
Total time:          12.34s

⚠️  First throttle occurred at request #1001
============================================================

Last 10 requests:
  ✅ Request #992: HTTP 200 (0.123s)
  ✅ Request #993: HTTP 200 (0.118s)
  ...
  ✅ Request #1000: HTTP 200 (0.125s)
  ❌ Request #1001: HTTP 429 (0.089s)

📄 Detailed report saved to: load_test_report.json

JSON Report

The script generates a JSON file with detailed information:

{
  "summary": {
    "total_requests": 1001,
    "successful_requests": 1000,
    "failed_requests": 0,
    "throttled_requests": 1,
    "first_throttle_at": 1001,
    "total_time": 12.34
  },
  "requests": [
    {
      "number": 1,
      "status": 200,
      "time": 0.123,
      "timestamp": "2024-03-25T10:30:00.123456",
      "error": null
    },
    ...
  ]
}
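Because the report is plain JSON, it is easy to post-process. For example, extracting the throttled request numbers from the schema shown above (sketch; in practice you would `json.load` the report file):

```python
import json

report_text = """
{
  "summary": {"total_requests": 1001, "successful_requests": 1000,
              "failed_requests": 0, "throttled_requests": 1,
              "first_throttle_at": 1001, "total_time": 12.34},
  "requests": [
    {"number": 1000, "status": 200, "time": 0.125,
     "timestamp": "2024-03-25T10:30:00.123456", "error": null},
    {"number": 1001, "status": 429, "time": 0.089,
     "timestamp": "2024-03-25T10:30:00.456789", "error": null}
  ]
}
"""

report = json.loads(report_text)  # in practice: json.load(open("load_test_report.json"))
throttled = [r["number"] for r in report["requests"] if r["status"] in (429, 503)]
print(throttled)  # -> [1001]
```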

Dashboard Visualization

Launch the interactive Streamlit dashboard to explore results visually:

streamlit run dashboard.py

Home screen — upload or select your JSON report:

Dashboard Home

Analytics view — charts, metrics, and interactive request table:

Dashboard Analytics

🔍 Results Interpretation

✅ Throttling Working Correctly

  • First 1000 requests: HTTP 200
  • Request #1001: HTTP 429 or 503
  • first_throttle_at: 1001

⚠️ Potential Issues

| Symptom | Possible Cause |
|---|---|
| All requests successful (no throttling) | Rate limit not configured, or limit higher than the request count |
| Throttling before request 1000 | Rate limit stricter than expected |
| All requests fail | Endpoint down, incorrect URL, or authentication required |
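The table above can be folded into a verdict function (hypothetical helper for post-processing; the script itself only reports `first_throttle_at`):

```python
def interpret(first_throttle_at, expected_limit):
    """Map a run's first throttle position to the interpretation table above."""
    if first_throttle_at is None:
        return "no throttling: limit not configured or higher than the request count"
    if first_throttle_at == expected_limit + 1:
        return "throttling working correctly"
    if first_throttle_at <= expected_limit:
        return "rate limit stricter than expected"
    return "rate limit looser than expected"

print(interpret(1001, 1000))  # -> throttling working correctly
print(interpret(800, 1000))   # -> rate limit stricter than expected
```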

💡 Use Cases

Verify Specific Rate Limit

# Test 500 requests/hour limit
python load_test.py --mode burst --requests 501

Simulate Real Load Over 1 Hour

# 1000 requests distributed evenly
python load_test.py --mode distributed --requests 1000

Test Burst Tolerance

# See how many requests it handles in small bursts
python load_test.py --mode rafagas --burst-size 20 --burst-delay 10 --requests 1001

🛠️ Troubleshooting

Error: ModuleNotFoundError: No module named 'aiohttp'

pip install -r requirements.txt

Connection Error

Verify that the endpoint configured in .env is accessible:

# Verify the configured endpoint (replace with your TARGET_URL)
curl -I https://api.example.com/endpoint

Very Slow Requests

  • distributed mode takes ~1 hour by design
  • Use burst mode for quick tests
  • Check internet connection

📝 Technical Notes

  • Async: Uses aiohttp + asyncio for efficient concurrent requests
  • Rate Limiting: Detects HTTP 429 (Too Many Requests) and 503 (Service Unavailable)
  • Timeout: Each request has automatic aiohttp timeout
  • Logging: Shows progress every 100 requests
  • Configuration: Uses python-dotenv to load environment variables from .env
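The detection rule noted above (200 counts as success, 429/503 as throttled, anything else as failed) is straightforward to express as a classifier (sketch of the rule, not the script's code):

```python
def classify(status):
    """Bucket one response status the way the summary counters do."""
    if status == 200:
        return "successful"
    if status in (429, 503):
        return "throttled"
    return "failed"

print([classify(s) for s in (200, 429, 503, 500)])
# -> ['successful', 'throttled', 'throttled', 'failed']
```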

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.
