AI Inference Operator for Kubernetes. The easiest way to serve ML models in production. Supports VLMs, LLMs, embeddings, and speech-to-text.
Go · Updated Mar 13, 2025
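Inference operators like this one typically expose an OpenAI-compatible HTTP API once a model is deployed. Below is a minimal Go sketch of calling such an endpoint from inside the cluster; the service URL and model name are hypothetical placeholders, not confirmed by the listing above.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// Request/response shapes for an OpenAI-style chat-completions call.
type chatMessage struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type chatRequest struct {
	Model    string        `json:"model"`
	Messages []chatMessage `json:"messages"`
}

type chatResponse struct {
	Choices []struct {
		Message chatMessage `json:"message"`
	} `json:"choices"`
}

func main() {
	// Hypothetical in-cluster service URL; adjust to your deployment.
	const endpoint = "http://my-inference-service.default.svc.cluster.local/v1/chat/completions"

	body, err := json.Marshal(chatRequest{
		Model: "llama-3.1-8b-instruct", // assumed model name
		Messages: []chatMessage{
			{Role: "user", Content: "Summarize what a Kubernetes operator does."},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	resp, err := http.Post(endpoint, "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var out chatResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}
	if len(out.Choices) > 0 {
		fmt.Println(out.Choices[0].Message.Content)
	}
}
```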
This repository contains Terraform configurations for deploying the vLLM production-stack on cloud-managed Kubernetes.