
FastGRPC


Add gRPC to your FastAPI app in one line. No protobuf. No config. Up to ~4.5× faster than HTTP/JSON in our benchmarks.

The Pitch

You already have a FastAPI app. FastGRPC reads it at startup and gives you a fully working gRPC server — same handlers, same Pydantic models, zero new files to write or maintain.

from fastapi import FastAPI
from fastgrpc import FastGRPC

# Your existing FastAPI app, unchanged
app = FastAPI()

@app.get("/items/{item_id}", response_model=Item)
async def get_item(item_id: int) -> Item:
    ...

# One line to add gRPC on port 50051
FastGRPC(app, grpc_port=50051)

That's the entire migration.

Performance

Benchmarked on the same FastAPI app, same handler logic, 30,000 requests over ~1 minute with persistent connections (how production clients work):

Scenario                             HTTP/JSON   gRPC      Speedup
Small payload (3 fields)             5.38 ms     1.18 ms   4.6×
Large payload (~50 fields, nested)   6.12 ms     1.37 ms   4.5×

aarch64, Python 3.13, httpx keep-alive vs grpc.aio persistent channel

Why gRPC is faster:

  • Binary encoding — protobuf is more compact and faster to parse than JSON, especially for numbers and nested objects
  • HTTP/2 multiplexing — one persistent TCP connection handles many concurrent requests with no head-of-line blocking
  • No header overhead — HPACK compression; HTTP/1.1 resends headers verbatim every request

Gotcha: grpcurl and curl spawn a new OS process per call, adding ~30–40 ms of overhead. Always benchmark with persistent clients to see true protocol performance.

Installation

uv add git+https://github.com/barakplasma/FastGRPC.git
# or
pip install git+https://github.com/barakplasma/FastGRPC.git

Requires Python 3.10+. Installs grpcio, grpcio-tools, and grpcio-reflection automatically.

Quick Start

from fastapi import FastAPI
from pydantic import BaseModel
from fastgrpc import FastGRPC

app = FastAPI()

class Item(BaseModel):
    id: int
    name: str
    price: float

class ItemCreate(BaseModel):
    name: str
    price: float

@app.post("/items/", response_model=Item)
async def create_item(item: ItemCreate) -> Item:
    return Item(id=1, **item.model_dump())

@app.get("/items/{item_id}", response_model=Item)
async def get_item(item_id: int) -> Item:
    return Item(id=item_id, name="Example", price=9.99)

# Add gRPC — hooks into FastAPI startup/shutdown automatically
FastGRPC(app, grpc_port=50051, enable_reflection=True)

Run it:

uvicorn main:app  # HTTP on :8000, gRPC on :50051

How It Works

FastGRPC runs at startup and does four things automatically:

app.openapi()  →  OpenAPI JSON  →  .proto file  →  compiled stubs  →  grpc.aio server
  1. Reads your existing OpenAPI spec (already generated by FastAPI)
  2. Converts it to a .proto definition — no manual protobuf writing
  3. Compiles Python stubs via grpcio-tools (cached by schema hash, only recompiles on changes)
  4. Starts a grpc.aio server that delegates all requests to your existing route handlers

Your HTTP and gRPC servers run side by side. No code duplication.
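The OpenAPI-to-proto step (step 2 above) can be sketched roughly like this. This is a simplified illustration of the idea, not FastGRPC's actual converter; the type map, field numbering, and `schema_to_message` name are assumptions for the example:

```python
# Hypothetical mapping from JSON-schema scalar types to proto3 types.
PROTO_TYPES = {"integer": "int64", "number": "double", "string": "string", "boolean": "bool"}

def schema_to_message(name: str, schema: dict) -> str:
    """Render a Pydantic-style JSON schema as a proto3 message definition."""
    lines = [f"message {name} {{"]
    # Field numbers are assigned in declaration order, starting at 1.
    for i, (field, spec) in enumerate(schema.get("properties", {}).items(), start=1):
        proto_type = PROTO_TYPES.get(spec.get("type", "string"), "string")
        lines.append(f"  {proto_type} {field} = {i};")
    lines.append("}")
    return "\n".join(lines)

# The JSON schema FastAPI emits for the Item model from the Quick Start:
item_schema = {
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "string"},
        "price": {"type": "number"},
    }
}
print(schema_to_message("Item", item_schema))
```

The real converter also has to handle nested models, optional fields, and arrays, but the core is the same: walk the schema's properties and emit numbered proto fields.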

API

FastGRPC(
    app,                        # FastAPI app instance
    grpc_port=50051,            # gRPC server port
    service_name="FastAPI",     # Proto service name
    cache_dir=None,             # Cache directory (default: .fastgrpc_cache/)
    enable_reflection=False,    # Enable gRPC reflection for grpcurl discovery
)

Route → RPC Mapping

HTTP                       gRPC RPC       Request message       Response message
POST   /items/             CreateItems    CreateItemsRequest    CreateItemsResponse
GET    /items/{item_id}    GetItems       GetItemsRequest       GetItemsResponse
PUT    /items/{item_id}    UpdateItems    UpdateItemsRequest    UpdateItemsResponse
DELETE /items/{item_id}    DeleteItems    DeleteItemsRequest    DeleteItemsResponse
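The naming convention in the table above can be approximated with a small helper. This is an illustrative sketch inferred from the table's examples, not the library's actual naming code; the verb map is an assumption:

```python
# Hypothetical HTTP-method-to-verb mapping inferred from the table above.
VERBS = {"POST": "Create", "GET": "Get", "PUT": "Update", "DELETE": "Delete"}

def rpc_name(method: str, path: str) -> str:
    """Derive an RPC name like 'GetItems' from 'GET /items/{item_id}'."""
    # Use the first static path segment (skip {param} placeholders) as the resource.
    resource = next(s for s in path.strip("/").split("/") if not s.startswith("{"))
    return VERBS[method.upper()] + resource.capitalize()

print(rpc_name("GET", "/items/{item_id}"))  # GetItems
```

Request and response message names then follow by appending Request and Response to the RPC name.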

Export Proto for Other Languages

grpc = FastGRPC(app, grpc_port=50051)

# Get the .proto definition — use it to generate Go, Rust, Java clients
proto = grpc.export_proto("service.proto")

Then generate clients for other languages with protoc:

protoc --go_out=. --go-grpc_out=. service.proto

Testing with grpcurl

Enable reflection when starting your server:

FastGRPC(app, grpc_port=50051, enable_reflection=True)

Then discover and call methods:

# List services
grpcurl -plaintext localhost:50051 list
# myapp.MyService
# grpc.reflection.v1alpha.ServerReflection

# List methods
grpcurl -plaintext localhost:50051 list myapp.MyService
# myapp.MyService.GetItemItemsItemIdGet
# myapp.MyService.CreateItemItemsPost

# Call a method
grpcurl -plaintext \
  -d '{"item_id": 1}' \
  localhost:50051 \
  myapp.MyService/GetItemItemsItemIdGet

Method name shape: FastAPI's operationId PascalCased — {function_name}_{path_slug}_{http_method}. So get_item at GET /items/{item_id} becomes GetItemItemsItemIdGet. Use grpcurl list <service> to see exact names.
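The PascalCasing step can be sketched as follows, assuming FastAPI's default operationId format (function name plus a slugged path plus the HTTP method, joined by underscores). This is an illustration of the convention, not FastGRPC's actual code:

```python
def grpc_method_name(operation_id: str) -> str:
    """PascalCase a FastAPI operationId, e.g. 'get_item_items__item_id__get'."""
    # Path parameters like {item_id} appear as double underscores in the
    # operationId; filtering empty parts collapses them away.
    return "".join(part.capitalize() for part in operation_id.split("_") if part)

print(grpc_method_name("get_item_items__item_id__get"))  # GetItemItemsItemIdGet
print(grpc_method_name("create_item_items__post"))       # CreateItemItemsPost
```

If a route sets a custom operation_id, the gRPC method name follows it instead, which is why grpcurl list is the reliable way to see the exact names.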

Without reflection, pass the proto file directly:

# Proto is cached at .fastgrpc_cache/<hash>/<name>.proto after first startup
grpcurl -plaintext \
  -proto .fastgrpc_cache/<hash>/myapp.proto \
  -d '{"item_id": 1}' \
  localhost:50051 \
  myapp.MyService/GetItemItemsItemIdGet

Run the Benchmarks

# Starts both servers and runs the full comparison automatically (~1 minute)
uv run python benchmarks/bench_client.py --runs 30000 --warmup 500

What's Included

  • Unary RPCs for GET, POST, PUT, DELETE, PATCH
  • Pydantic request/response models (including nested models)
  • Path and query parameters
  • Automatic startup/shutdown via FastAPI lifecycle
  • Schema-hash-based stub caching (recompiles only on changes)
  • Optional gRPC reflection
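The schema-hash caching works on a simple idea: hash a canonical serialization of the OpenAPI spec, and reuse the compiled stubs whenever the hash is unchanged. A minimal sketch, assuming SHA-256 and a 16-character truncation (both are assumptions, not FastGRPC's documented choices):

```python
import hashlib
import json

def schema_hash(openapi_spec: dict) -> str:
    """Hash the OpenAPI spec canonically so stubs recompile only on changes."""
    # sort_keys + fixed separators make the serialization deterministic,
    # so semantically identical specs always hash the same.
    canonical = json.dumps(openapi_spec, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

spec_a = {"paths": {"/items/": {"post": {}}}, "openapi": "3.1.0"}
spec_b = {"openapi": "3.1.0", "paths": {"/items/": {"post": {}}}}  # same spec, different key order
print(schema_hash(spec_a) == schema_hash(spec_b))  # True
```

The hash then names the cache subdirectory (e.g. .fastgrpc_cache/<hash>/), so adding a route or changing a model produces a new hash and triggers a fresh compile, while a restart with an unchanged app hits the cache.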

Not in V1: streaming RPCs, auth interceptors, WebSocket routes.

Dependencies

  • grpcio — async gRPC server (grpc.aio)
  • grpcio-tools — proto compiler
  • grpcio-reflection — optional reflection support
  • protobuf — proto message runtime
  • fastapi — peer dependency

License

MIT
