> **Warning**
> This is a research-level hobby project in alpha. It is not suitable for production use or any serious application.
A Rust/Axum web server with Flask-style dynamic Python routing via PyO3.
You add this to your own Axum server to enable Python endpoints:
```rust
use axum::{routing::any, Router};
use snaxum::prelude::*;
use std::sync::Arc;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Initialize Python
    pyo3::Python::initialize();

    // Configure the Python runtime
    let config = SnaxumConfig::builder()
        .python_dir("./python")
        .module("endpoints")
        .module("pool_handlers")
        .pool_workers(4)
        .dispatch_workers(4)
        .build()?;

    let runtime = Arc::new(PythonRuntime::with_config(config)?);

    let app = Router::new()
        .route("/python/{*path}", any(handle_python_request))
        .with_state(runtime);

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await?;
    axum::serve(listener, app).await?;
    Ok(())
}
```

Now you can add Python endpoints in files under `./python/`:
```python
# ./python/endpoints.py
import sys
from concurrent.futures import ProcessPoolExecutor

import polars as pl

from snaxum import route, Request


@route('/python/hello', methods=['GET'])
def hello(request: Request) -> dict:
    """Return a greeting with Python version info."""
    return {
        "message": "Hello from Python",
        "version": f"{sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro}"
    }


@route('/python/users/<int:user_id>', methods=['GET'])
def get_user(request: Request) -> dict:
    """Get a user by ID - demonstrates path parameter extraction."""
    user_id = request.path_params['user_id']  # Already converted to int
    return {"user_id": user_id, "name": f"User {user_id}"}


def compute_squares(numbers: list[int]) -> list[int]:
    return [n * n for n in numbers]


@route('/python/pool/compute', methods=['POST'], use_process_pool=True)
def pool_compute(request: Request, pool: ProcessPoolExecutor) -> dict[str, list[int]]:
    """Compute squares of numbers using the process pool (POST with body)."""
    numbers = (request.body or {}).get('numbers', [])
    future = pool.submit(compute_squares, numbers)
    return {"squares": future.result()}
```

Async handlers are also supported - just use `async def`:
```python
# ./python/async_endpoints.py
import asyncio
from concurrent.futures import ProcessPoolExecutor

from snaxum import route, Request

from endpoints import compute_squares


@route('/python/async/hello', methods=['GET'])
async def async_hello(request: Request) -> dict:
    """Async handlers run in a dedicated asyncio event loop."""
    return {"message": "Hello from async Python!"}


@route('/python/async/sleep', methods=['GET'])
async def async_sleep(request: Request) -> dict:
    """Non-blocking sleep - doesn't block other requests."""
    duration = float(request.query_params.get('duration', '1.0'))
    await asyncio.sleep(duration)
    return {"slept": duration}


@route('/python/async/concurrent', methods=['GET'])
async def async_concurrent(request: Request) -> dict:
    """Run multiple operations concurrently."""
    async def fetch(name: str) -> dict:
        await asyncio.sleep(1.0)
        return {"name": name}

    # All three run concurrently - total time ~1s, not 3s
    results = await asyncio.gather(
        fetch("a"), fetch("b"), fetch("c")
    )
    return {"results": list(results)}


@route('/python/async/compute', methods=['POST'], use_process_pool=True)
async def async_compute(request: Request, pool: ProcessPoolExecutor) -> dict:
    """CPU-bound work in async handler using process pool."""
    loop = asyncio.get_running_loop()
    result = await loop.run_in_executor(pool, compute_squares, request.body)
    return {"result": result}
```

- See the `example/` directory for a full working example.
- Routes can go in any Python file in `./python/`.
- All Python endpoints are registered dynamically at runtime via the `@route` decorator.
- Virtual environments are supported for dependency management.
The goal is to let developers write HTTP endpoints for the Axum web server in either Rust or Python:
- Rust endpoints are compiled, high-performance, and type-safe
- Python endpoints are dynamic, easy to write, and support rich libraries
In practice, Rust-focused developers will likely write Rust endpoints, and Python developers Python endpoints. Having both enables rapid experimentation in Python, with the option to later port performance-critical endpoints to Rust. A typical use case: researchers and data scientists prototype web APIs in Python using libraries like Pandas, NumPy, or machine learning frameworks. If an endpoint becomes performance-critical, Rust experts can rewrite it later, with the working Python reference implementation available in the same server codebase.

Because the server entrypoint itself is written in Rust, the performance ceiling is far higher than that of a pure Python server. Trying to speed up a Python server by adding native Rust extensions still leaves Python's inherent overhead in the request path.

This POC demonstrates an architecture that aims to combine the best of both worlds.
```
snaxum/
├── src/
│   ├── main.rs            # Server setup, signal handling
│   ├── rust_handlers.rs   # Pure Rust endpoints
│   ├── python_runtime.rs  # Python thread with channel communication
│   └── dispatcher.rs      # Catch-all handler for /python/*
├── python/
│   ├── snaxum.py          # Framework: @route decorator, Request, dispatch
│   ├── endpoints.py       # In-thread Python handlers
│   ├── pool_handlers.py   # Process pool handlers
│   └── pool_workers.py    # Process pool worker functions
├── Cargo.toml
└── pyproject.toml
```
- Rust (via rustup)
- Python 3.10+ (system install with shared library in a standard path)
- uv (Python package manager): `curl -LsSf https://astral.sh/uv/install.sh | sh`
- just (command runner): available via your package manager
```bash
# Create venv and install dependencies
just setup

# Start the server
just serve

# Run tests (in another terminal, while the server is running)
just test-all
```

This project uses system Python for the venv (required for PyO3 runtime linking) and uv for package management:
- Create venv: `just venv` (uses `/usr/bin/python3 -m venv`)
- Sync dependencies: `just sync` (or `uv sync`)
- Lock dependencies: `just lock` (or `uv lock`)
**Why system Python?** PyO3 needs `libpython` in a standard library path at runtime. System Python's libraries are in `/usr/lib64/`, which is already in the linker search path. uv-managed Python installations keep their libraries in non-standard locations.
The `just serve` command automatically configures:

- `PYO3_PYTHON` - points PyO3 to the venv's Python interpreter
- `PYTHONPATH` - adds the venv's site-packages so embedded Python can import installed packages
- Add the dependency to `pyproject.toml` under `[project.dependencies]`
- Run `just lock` to update `uv.lock`
- Run `just sync` to install
For production, the deployment system must:

- Create a Python 3.10+ virtual environment (with system Python for `libpython` access)
- Install dependencies from `pyproject.toml` (or use `uv.lock` for reproducible builds)
- Set `PYO3_PYTHON` to point to the venv's Python interpreter
- Set `PYTHONPATH` to the venv's site-packages directory
Example Dockerfile pattern:

```dockerfile
# Use system Python for venv (libpython must be in standard path)
RUN python3 -m venv /app/.venv
RUN uv sync --frozen
ENV PYO3_PYTHON=/app/.venv/bin/python
ENV PYTHONPATH=/app/.venv/lib/python3.12/site-packages
```

Route handlers written in Rust work as normal Axum async handlers. Python route handlers are all registered dynamically at runtime via decorators in Python code. The following diagram illustrates the request flow for Python endpoints:
```
HTTP Request → Axum catch-all route (/python/*path)
  → Generic Dispatcher → Channel → Python Runtime Thread
  → snaxum.dispatch(method, path, request_data)
  → Route Registry path matching → User Handler → Response
```
**Key principle:** fully dynamic. Rust knows nothing about individual routes; it simply catches all `/python/*` requests and delegates path matching and dispatch to Python.
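The Python side of this dispatch can be pictured as follows. The function name mirrors `snaxum.dispatch` from the flow above, but the registry format and 404 handling here are illustrative assumptions, not the actual implementation:

```python
# Illustrative sketch of a dispatch entrypoint called once per request.
# The registry layout and error shape are assumptions.
from typing import Any, Callable

ROUTES: list[tuple[str, list[str], Callable[..., Any]]] = []

def dispatch(method: str, path: str, request_data: dict) -> dict:
    """Find the first route whose path and method match, and call its handler."""
    for pattern, methods, handler in ROUTES:
        if pattern == path and method in methods:  # exact match only, for brevity
            return handler(request_data)
    return {"status": 404, "error": f"no route for {method} {path}"}

# Routes are normally registered via @route; appended directly here for brevity.
ROUTES.append(("/python/hello", ["GET"], lambda req: {"message": "Hello"}))
```

Because matching happens entirely in Python, the Rust dispatcher stays a single catch-all function regardless of how many routes exist.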
A dedicated Rust thread owns the Python GIL and the `ProcessPoolExecutor`. It:

- Initializes Python and adds `python/` to `sys.path`
- Imports the `snaxum` framework
- Imports user modules (registers routes via `@route` decorators)
- Creates the `ProcessPoolExecutor` with a worker initializer
- Logs all registered routes at startup
- Loops on the channel, calling `snaxum.dispatch()` for each request
Communication uses `std::sync::mpsc` to send requests to the Python thread and `tokio::sync::oneshot` to return responses to the awaiting async tasks.
```python
@route('/python/hello', methods=['GET'])
def hello(request: Request) -> dict:
    return {"message": "Hello"}
```

- Handler runs in the Python runtime thread
- GIL held for the duration of the request
- Use for: quick computations, I/O-bound Python code
```python
@route('/python/compute', methods=['POST'], use_process_pool=True)
def compute(request: Request, pool: ProcessPoolExecutor) -> dict:
    future = pool.submit(heavy_work, request.body)
    return {"result": future.result()}
```

- Handler receives the `ProcessPoolExecutor` as its second argument
- True parallelism across cores via separate processes
- Workers ignore SIGINT (the main process handles shutdown)
- Use for: CPU-bound work (numpy, pandas, ML inference)
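Outside the server, the same submit/result pattern looks like the standalone sketch below, including a SIGINT-ignoring worker initializer as mentioned above. This is a plain `concurrent.futures` example, not snaxum's actual pool setup:

```python
# Standalone sketch of the process-pool pattern: worker processes
# ignore SIGINT so only the main process reacts to Ctrl-C.
import signal
from concurrent.futures import ProcessPoolExecutor

def _ignore_sigint() -> None:
    """Worker initializer: leave SIGINT handling to the main process."""
    signal.signal(signal.SIGINT, signal.SIG_IGN)

def heavy_work(numbers: list[int]) -> list[int]:
    # Must be a module-level (picklable) function to cross the process boundary.
    return [n * n for n in numbers]

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=2, initializer=_ignore_sigint) as pool:
        future = pool.submit(heavy_work, [1, 2, 3])
        print(future.result())  # prints [1, 4, 9]
```

Note that closures and lambdas cannot be submitted to a process pool; only importable module-level functions pickle cleanly, which is why the repository keeps worker functions in a separate `pool_workers.py`.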
```python
@route('/python/async/io', methods=['GET'])
async def async_io(request: Request) -> dict:
    await asyncio.sleep(1.0)  # Non-blocking
    return {"waited": 1.0}


@route('/python/async/compute', methods=['POST'], use_process_pool=True)
async def async_compute(request: Request, pool: ProcessPoolExecutor) -> dict:
    loop = asyncio.get_running_loop()
    result = await loop.run_in_executor(pool, heavy_work, request.body)
    return {"result": result}
```

- Handlers use `async def` - automatically detected and routed to the async runtime
- Run in a dedicated asyncio event loop thread
- Support thousands of concurrent requests without blocking workers
- Use `use_process_pool=True` + `run_in_executor()` for CPU-bound work in async handlers
- Use for: high-latency I/O, concurrent operations, websocket-style patterns
```python
@route('/python/users/<int:user_id>', methods=['GET'])
def get_user(request: Request) -> dict:
    user_id = request.path_params['user_id']  # Already an int
    return {"user_id": user_id}
```

Supported types: `<int:name>`, `<float:name>`, `<name>` (string)
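Typed patterns of this kind are commonly implemented with a small converter table that compiles each pattern to a regex. The sketch below is illustrative; the converter table and regexes are assumptions about how snaxum might implement matching, not its actual code:

```python
# Sketch of typed path-parameter matching for patterns like
# '/python/users/<int:user_id>'. Converter table is an assumption.
import re
from typing import Any, Optional

CONVERTERS = {
    "int": (r"\d+", int),
    "float": (r"\d+(?:\.\d+)?", float),
    "str": (r"[^/]+", str),  # plain <name> falls back to string
}

def match_path(pattern: str, path: str) -> Optional[dict[str, Any]]:
    """Return typed path params if `path` matches `pattern`, else None."""
    regex_parts, casts = [], {}
    for segment in pattern.strip("/").split("/"):
        m = re.fullmatch(r"<(?:(\w+):)?(\w+)>", segment)
        if m:
            kind, name = m.group(1) or "str", m.group(2)
            sub_regex, cast = CONVERTERS[kind]
            regex_parts.append(f"(?P<{name}>{sub_regex})")
            casts[name] = cast
        else:
            regex_parts.append(re.escape(segment))
    full = re.fullmatch("/".join(regex_parts), path.strip("/"))
    if full is None:
        return None
    return {name: casts[name](value) for name, value in full.groupdict().items()}
```

A non-matching segment type (e.g. `abc` against `<int:user_id>`) simply fails the regex, so the dispatcher can fall through to the next route or return a 404.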
```python
class Request:
    path_params: Dict[str, Any]    # Extracted from path
    query_params: Dict[str, str]   # ?foo=bar
    headers: Dict[str, str]        # HTTP headers
    body: Optional[Any]            # Parsed JSON body
    method: str                    # GET, POST, etc.
    path: str                      # Full request path
```

Shutdown proceeds as follows:

- SIGINT/SIGTERM received by the Rust signal handler
- Axum graceful shutdown drains connections
- `RuntimeMessage::Shutdown` sent to the Python thread
- `ProcessPoolExecutor.shutdown(wait=True, cancel_futures=True)`
- Python thread exits, joined by the main thread
- Clean exit
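The shutdown handoff can be modeled as a sentinel message on the request channel. This simplified single-process sketch uses `queue.Queue` where the real implementation uses a Rust channel, and the sentinel stands in for `RuntimeMessage::Shutdown`:

```python
# Simplified model of the runtime-thread loop: process requests until
# a shutdown sentinel arrives, then exit cleanly so the main thread
# can join. Channel and message types are stand-ins for the Rust ones.
import queue
import threading

SHUTDOWN = object()  # sentinel standing in for RuntimeMessage::Shutdown

def runtime_loop(requests: "queue.Queue", results: list) -> None:
    while True:
        msg = requests.get()
        if msg is SHUTDOWN:
            break  # the real runtime would also shut down the process pool here
        results.append({"echo": msg})

requests: "queue.Queue" = queue.Queue()
results: list = []
thread = threading.Thread(target=runtime_loop, args=(requests, results))
thread.start()
requests.put("hello")
requests.put(SHUTDOWN)
thread.join()  # mirrors the Rust main thread joining the Python thread
```

Because the sentinel travels through the same channel as requests, all messages queued before shutdown are still processed, which matches the graceful-drain behavior described above.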
See `example/python/endpoints.py` and `example/python/pool_handlers.py` for full examples.
No rebuild required - just add a decorated function and restart:
```python
# python/endpoints.py
@route('/python/new-feature', methods=['GET', 'POST'])
def new_feature(request: Request) -> dict:
    return {"status": "works"}
```

The architecture guarantees that Python running in its dedicated thread never interferes with Rust endpoint execution:
| Component | Execution Context |
|---|---|
| Rust endpoints | Tokio async task executor (event loop) |
| Python handlers | Dedicated OS thread with GIL |
These are completely independent. Rust async tasks and the Python thread share no execution resources.
Python's Global Interpreter Lock only affects Python threads. The Rust async code runs on separate OS threads managed by Tokio and is never blocked waiting for the GIL:
- `Python::attach()` binds the Python context to the dedicated thread only
- Tokio tasks continue executing while Python holds the GIL
- No GIL acquisition happens in Rust endpoint handlers
| Rust Endpoints | Python Endpoints |
|---|---|
| Access no Python state | Access no Rust handler state |
| Pure async functions | Isolated via channel message passing |
| Independent request/response | Each request gets own oneshot channel |
Python requests flow through thread-safe channels:
```
HTTP Request → Tokio task → mpsc::Sender → Python thread → oneshot::Sender → Tokio task → HTTP Response
```

- `mpsc::Sender` is `Clone + Send` - safe to share across async tasks
- `oneshot::Receiver` is awaited asynchronously - doesn't block the event loop
- Each request is independent; no shared state in the dispatch path
Signal conflicts are explicitly avoided (`main.rs:17-23`):
- Python's default SIGINT handler is disabled
- Rust handles SIGINT/SIGTERM for graceful shutdown
- ProcessPoolExecutor workers ignore SIGINT
| Property | Guarantee |
|---|---|
| Rust endpoints blocked by Python | No - different execution contexts |
| GIL contention with Rust | No - GIL is thread-local |
| Shared state race conditions | No - no shared mutable state |
| Signal handler conflicts | No - explicitly managed |
| Process pool GIL issues | No - separate processes, not threads |
- axum 0.8: HTTP routing and server
- pyo3 0.27: Rust-Python bindings with the `auto-initialize` feature
- tokio: async runtime with channels for Python communication