# Pyxle Framework Benchmark Suite

Transparent, reproducible benchmarks comparing [Pyxle](https://pyxle.dev) against popular web frameworks.

Every framework implements identical API endpoints with the same business logic, database schema, and response format. You can verify this by reading the source code in each `frameworks/` subdirectory.

## Quick Start

```sh
./run.sh
```

This installs dependencies, starts all servers, runs the benchmark, and prints results. See Manual Setup below if you prefer running steps individually.

## Frameworks Tested

| Framework | Language | Category | Server |
|-----------|----------|----------|--------|
| Pyxle | Python | Full-stack (SSR + API) | uvicorn (via `pyxle serve`) |
| FastAPI | Python | API | uvicorn |
| Django | Python | Full-stack | uvicorn (ASGI) |
| Flask | Python | Micro | gunicorn (4 sync workers) |
| Express | Node.js | API | built-in |
| Hono | Node.js | Ultralight API | `@hono/node-server` |

## Test Endpoints

Each framework implements these identical endpoints:

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/api/json` | GET | Return a static JSON object (pure serialization overhead) |
| `/api/db` | GET | Read one random row from SQLite (framework + DB) |
| `/api/queries?n=5` | GET | Read 5 random rows from SQLite (query loop) |
| `/api/queries?n=20` | GET | Read 20 random rows (heavier workload) |
| `/api/form` | POST | Parse JSON body, validate, return response |
| `/health` | GET | Minimal health check (raw routing overhead) |

All frameworks use the same SQLite database with 1,000 seeded rows (deterministic seed for reproducibility).
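The two database endpoints boil down to random single-row reads. A minimal sketch of that shared core logic, outside any framework (the `items` table and `value` column here are illustrative assumptions, not the benchmark's actual schema):

```python
import random
import sqlite3

def get_random_row(conn: sqlite3.Connection) -> dict:
    """One /api/db request: read a single row by a random primary key (ids assumed 1..1000)."""
    row_id = random.randint(1, 1000)
    cur = conn.execute("SELECT id, value FROM items WHERE id = ?", (row_id,))
    row = cur.fetchone()
    return {"id": row[0], "value": row[1]}

def get_random_rows(conn: sqlite3.Connection, n: int) -> list[dict]:
    """One /api/queries?n=N request: N independent single-row reads, as in the query loop."""
    return [get_random_row(conn) for _ in range(n)]
```

The per-framework handlers wrap this same logic in their own routing and serialization layers, which is exactly the overhead the benchmark measures.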

## Manual Setup

### Prerequisites

- Node.js 18+ and npm
- Python 3.10+ and pip
- A Unix-like OS (macOS or Linux)

### 1. Install Dependencies

```sh
# Create a virtual environment
python3 -m venv .venv && source .venv/bin/activate

# Python frameworks
pip install pyxle-framework fastapi uvicorn django flask gunicorn

# Node.js frameworks + benchmark runner
cd frameworks/express && npm install && cd ../..
cd frameworks/hono && npm install && cd ../..
cd runner && npm install && cd ..
```

### 2. Build Pyxle

```sh
cd frameworks/pyxle && npm install && pyxle build && cd ../..
```

### 3. Start Servers

```sh
# Each command's `cd` runs inside the background subshell created by the
# trailing `&`, so your working directory is unchanged between commands.

# Pyxle (port 8001)
cd frameworks/pyxle && pyxle serve --host 127.0.0.1 --port 8001 --skip-build &

# FastAPI (port 8002)
cd frameworks/fastapi && uvicorn main:app --host 127.0.0.1 --port 8002 &

# Django (port 8003)
cd frameworks/django && DJANGO_SETTINGS_MODULE=benchapp.settings \
  uvicorn benchapp.asgi:application --host 127.0.0.1 --port 8003 &

# Flask (port 8004)
cd frameworks/flask && gunicorn -w 4 -b 127.0.0.1:8004 app:app &

# Express (port 8005)
cd frameworks/express && PORT=8005 node app.mjs &

# Hono (port 8006)
cd frameworks/hono && PORT=8006 node app.mjs &
```
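Before benchmarking, it is worth confirming every server is actually accepting requests. A small Python sketch that polls each framework's `/health` endpoint until it answers (ports as started above; this helper is illustrative, not part of the repo):

```python
import time
import urllib.request

PORTS = [8001, 8002, 8003, 8004, 8005, 8006]  # ports from step 3

def wait_for_servers(ports, timeout=30.0):
    """Poll each server's /health endpoint; return the set of ports that never responded."""
    deadline = time.monotonic() + timeout
    pending = set(ports)
    while pending and time.monotonic() < deadline:
        for port in list(pending):
            try:
                with urllib.request.urlopen(f"http://127.0.0.1:{port}/health", timeout=1):
                    pending.discard(port)  # got a response; server is up
            except OSError:
                pass  # not up yet (connection refused or timed out)
        if pending:
            time.sleep(0.5)
    return pending
```

Calling `wait_for_servers(PORTS)` and checking the result is empty is a quick sanity gate before step 4.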

### 4. Run Benchmarks

```sh
cd runner && node bench.mjs
```

## CLI Options

```
node bench.mjs [options]

  --duration=N          Seconds per test (default: 10)
  --connections=N,M,... Concurrency levels (default: 10,50)
  --only=fw1,fw2,...    Only test specified frameworks
  --tests=t1,t2,...     Only run specific tests (json,db,queries,queries20,form,health)
  --warmup=N            Warmup requests per endpoint (default: 3)
  --output=FILE         Save JSON results to file
```

Examples:

```sh
# Quick comparison of just Pyxle and FastAPI
node bench.mjs --only=pyxle,fastapi --duration=5

# Deep test with high concurrency
node bench.mjs --connections=10,50,100,200 --duration=15

# Only test JSON and form endpoints
node bench.mjs --tests=json,form
```

## Methodology

- Tool: autocannon v8 (HTTP/1.1 benchmarking)
- Warmup: Each endpoint receives warmup requests before measurement
- Duration: Configurable (default 10 seconds per test)
- Concurrency: Configurable connection levels (default 10 and 50)
- Database: SQLite with WAL mode, 1,000 pre-seeded rows, identical schema across all frameworks
- Hardware: Results vary by machine; always compare on the same hardware
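The deterministic seed is what makes runs comparable: the same 1,000 rows are generated every time. A sketch of how such a seeding step could look (the table name, columns, and seed value are assumptions for illustration):

```python
import random
import sqlite3

def seed_database(path: str, rows: int = 1000, seed: int = 42) -> None:
    """Create and populate the benchmark table with reproducible pseudo-random values."""
    rng = random.Random(seed)  # fixed seed -> identical data on every run
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA journal_mode=WAL")  # matches the WAL mode noted above
    conn.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, value INTEGER)")
    conn.execute("DELETE FROM items")  # re-seeding always starts from a clean table
    conn.executemany(
        "INSERT INTO items (id, value) VALUES (?, ?)",
        [(i, rng.randint(1, 10_000)) for i in range(1, rows + 1)],
    )
    conn.commit()
    conn.close()
```

Because the generator is seeded, seeding two separate database files yields byte-for-byte identical row data, so every framework queries exactly the same dataset.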

## Fairness

- All Python frameworks run on uvicorn (single worker, ASGI), except Flask, which uses gunicorn (4 WSGI workers) because Flask is synchronous
- All Node.js frameworks run on their default/recommended server
- Pyxle runs via `pyxle serve` (production mode) with CSRF disabled for a fair POST comparison
- Each framework's code is idiomatic, neither artificially optimized nor handicapped
- Response bodies are identical across all frameworks for each endpoint
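The identical-response claim is easy to spot-check once the servers are running. A sketch that fetches one endpoint from every framework and compares the parsed bodies (ports from step 3; the helper names are illustrative):

```python
import json
import urllib.request

PORTS = {"pyxle": 8001, "fastapi": 8002, "django": 8003,
         "flask": 8004, "express": 8005, "hono": 8006}

def fetch_json(port: int, path: str = "/api/json") -> object:
    """Fetch one endpoint from a local server and parse its JSON body."""
    with urllib.request.urlopen(f"http://127.0.0.1:{port}{path}", timeout=5) as resp:
        return json.load(resp)

def bodies_identical(ports: dict, path: str = "/api/json") -> bool:
    """True if every framework returns the same parsed JSON for this endpoint."""
    bodies = [fetch_json(p, path) for p in ports.values()]
    return all(b == bodies[0] for b in bodies[1:])
```

Comparing parsed JSON rather than raw bytes deliberately ignores differences in key ordering and whitespace, which serializers are free to vary.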

## Project Structure

```
benchmarks/
├── README.md
├── run.sh                 # One-command benchmark script
├── .gitignore
├── frameworks/
│   ├── pyxle/             # Full Pyxle app (file-based routing)
│   │   ├── pages/api/     # API routes (.py files)
│   │   ├── pyxle.config.json
│   │   └── package.json
│   ├── fastapi/           # FastAPI app (single file)
│   │   └── main.py
│   ├── django/            # Django project
│   │   ├── benchapp/      # Settings, views, urls, asgi
│   │   └── manage.py
│   ├── flask/             # Flask app (single file)
│   │   └── app.py
│   ├── express/           # Express.js app
│   │   ├── app.mjs
│   │   └── package.json
│   └── hono/              # Hono app
│       ├── app.mjs
│       └── package.json
├── runner/
│   ├── bench.mjs          # Benchmark runner (autocannon)
│   └── package.json
└── results/               # JSON results from benchmark runs
```

## Contributing

To add a new framework:

1. Create `frameworks/<name>/` with the app code
2. Implement all 6 endpoints with identical logic and response format
3. Add the framework to the `FRAMEWORKS` registry in `runner/bench.mjs`
4. Update this README

## License

MIT
