A FastAPI-based LLM serving layer with pluggable inference backends, custom routing, and GCP deployment support.
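A minimal sketch of what a pluggable-backend design like this might look like; every name below (`InferenceBackend`, `EchoBackend`, `route`) is illustrative, not the repository's actual API:

```python
from abc import ABC, abstractmethod


class InferenceBackend(ABC):
    """Illustrative interface a pluggable inference backend might implement."""

    @abstractmethod
    def generate(self, prompt: str, max_tokens: int = 128) -> str: ...


class EchoBackend(InferenceBackend):
    # Stand-in backend for demonstration; a real one would wrap
    # an engine such as vLLM or TensorRT-LLM.
    def generate(self, prompt: str, max_tokens: int = 128) -> str:
        return prompt[:max_tokens]


# Registry mapping backend names to instances; custom routing picks from here.
BACKENDS: dict[str, InferenceBackend] = {"echo": EchoBackend()}


def route(backend_name: str, prompt: str) -> str:
    # Simple name-based routing; a real router might consider load or model size.
    return BACKENDS[backend_name].generate(prompt)
```

In a FastAPI app, a request handler would call `route()` with a backend name taken from the request path or body.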
gcp pytorch inference-engine fastapi gcp-cloud-run vertex-ai vllm llm-inference tensorrt-llm multiple-llm
Updated Apr 10, 2026 - Python