Transparent, reproducible benchmarks comparing Pyxle against popular web frameworks.
Every framework implements identical API endpoints with the same business logic, database schema, and response format. You can verify this by reading the source code in each frameworks/ subdirectory.
## Quick Start

```sh
./run.sh
```

This installs dependencies, starts all servers, runs the benchmark, and prints results. See Manual Setup below if you prefer running steps individually.
## Frameworks Tested

| Framework | Language | Category | Server |
|---|---|---|---|
| Pyxle | Python | Full-stack (SSR + API) | uvicorn (via `pyxle serve`) |
| FastAPI | Python | API | uvicorn |
| Django | Python | Full-stack | uvicorn (ASGI) |
| Flask | Python | Micro | gunicorn (4 sync workers) |
| Express | Node.js | API | built-in (`node:http`) |
| Hono | Node.js | Ultralight API | @hono/node-server |
## Endpoints

Each framework implements these identical endpoints:
| Endpoint | Method | Description |
|---|---|---|
| `/api/json` | GET | Return a static JSON object (pure serialization overhead) |
| `/api/db` | GET | Read one random row from SQLite (framework + DB) |
| `/api/queries?n=5` | GET | Read 5 random rows from SQLite (query loop) |
| `/api/queries?n=20` | GET | Read 20 random rows (heavier workload) |
| `/api/form` | POST | Parse JSON body, validate, return response |
| `/health` | GET | Minimal health check (raw routing overhead) |
All frameworks use the same SQLite database with 1,000 seeded rows (deterministic seed for reproducibility).
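The seeding step can be sketched in a few lines of stdlib Python (a sketch only, not the repo's actual seed script; the `items` table name, its columns, and the seed value 42 are assumptions made for illustration):

```python
import random
import sqlite3

def seed(db_path: str, rows: int = 1000, seed_value: int = 42) -> None:
    """Seed the benchmark database with deterministic pseudo-random rows."""
    random.seed(seed_value)  # fixed seed, so every run produces identical data
    conn = sqlite3.connect(db_path)
    conn.execute("PRAGMA journal_mode=WAL")  # matches the benchmark's WAL mode
    conn.execute(
        "CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, value INTEGER)"
    )
    conn.execute("DELETE FROM items")
    conn.executemany(
        "INSERT INTO items (id, value) VALUES (?, ?)",
        [(i, random.randint(1, 10_000)) for i in range(1, rows + 1)],
    )
    conn.commit()
    conn.close()

def random_row(db_path: str, rows: int = 1000) -> tuple:
    """What /api/db does conceptually: fetch one random row by primary key."""
    conn = sqlite3.connect(db_path)
    row = conn.execute(
        "SELECT id, value FROM items WHERE id = ?",
        (random.randint(1, rows),),
    ).fetchone()
    conn.close()
    return row
```

Because the seed is fixed, re-running `seed()` always rebuilds byte-identical data, which is what makes the `/api/db` and `/api/queries` numbers comparable across runs.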
## Prerequisites

- Node.js 18+ and npm
- Python 3.10+ and pip
- A Unix-like OS (macOS or Linux)
## Manual Setup

### 1. Install dependencies

```sh
# Create a virtual environment
python3 -m venv .venv && source .venv/bin/activate

# Python frameworks
pip install pyxle-framework fastapi uvicorn django flask gunicorn

# Node.js frameworks + benchmark runner
cd frameworks/express && npm install && cd ../..
cd frameworks/hono && npm install && cd ../..
cd runner && npm install && cd ..

# Build the Pyxle app
cd frameworks/pyxle && npm install && pyxle build && cd ../..
```

### 2. Start the servers

Run these from the repository root. Each command is wrapped in a subshell so the `cd` does not change your working directory between commands.

```sh
# Pyxle (port 8001)
(cd frameworks/pyxle && pyxle serve --host 127.0.0.1 --port 8001 --skip-build &)

# FastAPI (port 8002)
(cd frameworks/fastapi && uvicorn main:app --host 127.0.0.1 --port 8002 &)

# Django (port 8003)
(cd frameworks/django && DJANGO_SETTINGS_MODULE=benchapp.settings \
  uvicorn benchapp.asgi:application --host 127.0.0.1 --port 8003 &)

# Flask (port 8004)
(cd frameworks/flask && gunicorn -w 4 -b 127.0.0.1:8004 app:app &)

# Express (port 8005)
(cd frameworks/express && PORT=8005 node app.mjs &)

# Hono (port 8006)
(cd frameworks/hono && PORT=8006 node app.mjs &)
```

### 3. Run the benchmark

```sh
cd runner && node bench.mjs
```

## Benchmark Options

```
node bench.mjs [options]

--duration=N           Seconds per test (default: 10)
--connections=N,M,...  Concurrency levels (default: 10,50)
--only=fw1,fw2,...     Only test specified frameworks
--tests=t1,t2,...      Only run specific tests (json,db,queries,queries20,form,health)
--warmup=N             Warmup requests per endpoint (default: 3)
--output=FILE          Save JSON results to file
```

Examples:

```sh
# Quick comparison of just Pyxle and FastAPI
node bench.mjs --only=pyxle,fastapi --duration=5

# Deep test with high concurrency
node bench.mjs --connections=10,50,100,200 --duration=15

# Only test JSON and form endpoints
node bench.mjs --tests=json,form
```

## Methodology

- Tool: autocannon v8 (HTTP/1.1 benchmarking)
- Warmup: Each endpoint receives warmup requests before measurement
- Duration: Configurable (default 10 seconds per test)
- Concurrency: Configurable connection levels (default 10 and 50)
- Database: SQLite with WAL mode, 1,000 pre-seeded rows, identical schema across all frameworks
- Hardware: Results vary by machine; always compare on the same hardware
- All Python frameworks run on uvicorn (single worker, ASGI), except Flask, which uses gunicorn (4 WSGI workers) because Flask is synchronous
- All Node.js frameworks run on their default/recommended server
- Pyxle runs via `pyxle serve` (production mode) with CSRF disabled for a fair POST comparison
- Each framework's code is idiomatic: not artificially optimized or handicapped
- Response bodies are identical across all frameworks for each endpoint
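The last point can be spot-checked rather than taken on faith. Below is a sketch of such a check; the port map mirrors the Manual Setup section above, `identical_bodies` and `fetch_all` are hypothetical helper names, and the comparison is on parsed JSON rather than raw bytes so that serializer whitespace and key-order differences are ignored. The servers must be running for `fetch_all` to work.

```python
import json
from urllib.request import urlopen

# Port map from the Manual Setup section above
SERVERS = {
    "pyxle": 8001, "fastapi": 8002, "django": 8003,
    "flask": 8004, "express": 8005, "hono": 8006,
}

def identical_bodies(bodies: dict) -> bool:
    """True if every framework returned structurally equal JSON.

    Parses each body before comparing, so key order and whitespace
    differences between JSON serializers do not cause false negatives.
    """
    parsed = [json.loads(b) for b in bodies.values()]
    return all(p == parsed[0] for p in parsed)

def fetch_all(path: str) -> dict:
    """Fetch the same endpoint from every running benchmark server."""
    return {
        name: urlopen(f"http://127.0.0.1:{port}{path}").read()
        for name, port in SERVERS.items()
    }
```

For example, `identical_bodies(fetch_all("/api/json"))` should return `True` while all six servers are up.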
## Project Structure

```
benchmarks/
├── README.md
├── run.sh                    # One-command benchmark script
├── .gitignore
├── frameworks/
│   ├── pyxle/                # Full Pyxle app (file-based routing)
│   │   ├── pages/api/        # API routes (.py files)
│   │   ├── pyxle.config.json
│   │   └── package.json
│   ├── fastapi/              # FastAPI app (single file)
│   │   └── main.py
│   ├── django/               # Django project
│   │   ├── benchapp/         # Settings, views, urls, asgi
│   │   └── manage.py
│   ├── flask/                # Flask app (single file)
│   │   └── app.py
│   ├── express/              # Express.js app
│   │   ├── app.mjs
│   │   └── package.json
│   └── hono/                 # Hono app
│       ├── app.mjs
│       └── package.json
├── runner/
│   ├── bench.mjs             # Benchmark runner (autocannon)
│   └── package.json
└── results/                  # JSON results from benchmark runs
```
## Contributing

To add a new framework:

- Create `frameworks/<name>/` with the app code
- Implement all 6 endpoints with identical logic and response format
- Add the framework to the `FRAMEWORKS` registry in `runner/bench.mjs`
- Update this README
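As a reference point for the "identical logic" requirement, the `/api/form` step can be sketched framework-agnostically. This is an illustrative sketch only: the `name`/`email` field names and the error format are assumptions, so match the existing implementations in `frameworks/`, not this code.

```python
def handle_form(payload: dict) -> tuple:
    """Sketch of the /api/form logic: parse, validate, respond.

    Returns an (http_status, response_body) pair. The field names
    ("name", "email") and error shape are illustrative assumptions,
    not the repo's actual schema.
    """
    errors = {}
    name = payload.get("name")
    if not isinstance(name, str) or not name.strip():
        errors["name"] = "required"
    email = payload.get("email")
    if not isinstance(email, str) or "@" not in email:
        errors["email"] = "invalid"
    if errors:
        return 400, {"ok": False, "errors": errors}
    return 200, {"ok": True, "name": name.strip(), "email": email}
```

Whatever shape the real endpoints use, the key constraint is that a new framework's handler must return byte-for-byte the same status codes and bodies as the existing six for the same input.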
## License

MIT