Resources for serving models in production
ML pipeline scheduling framework for heterogeneous systems, with Triton Inference Server as the backend
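As background, Triton exposes an HTTP/gRPC inference API; a minimal Python client call might look like the sketch below. The model name, tensor names, and shapes are placeholders for illustration, not details from this framework.

```python
# Hedged sketch: querying a Triton Inference Server over HTTP.
# "pipeline_stage", "INPUT0", "OUTPUT0", and the [1, 4] shape are
# assumptions, not taken from the repository above.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Describe the input tensor the model expects (name, shape, datatype).
inp = httpclient.InferInput("INPUT0", [1, 4], "FP32")
inp.set_data_from_numpy(np.random.rand(1, 4).astype(np.float32))

# Run inference and read the named output tensor back as a NumPy array.
result = client.infer(model_name="pipeline_stage", inputs=[inp])
print(result.as_numpy("OUTPUT0"))
```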
Big ML project with infrastructure (MLflow, MinIO, Grafana), a backend (FastAPI, CatBoost), and a frontend (React, MapLibre)
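A minimal sketch of the FastAPI-plus-CatBoost backend pattern; the model file name and feature schema here are assumptions, not taken from the project.

```python
# Hedged sketch: serving a trained CatBoost model behind a FastAPI endpoint.
# The path "model.cbm" and the flat list of numeric features are illustrative.
from catboost import CatBoostClassifier
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = CatBoostClassifier()
model.load_model("model.cbm")  # load a previously trained model from disk

class Features(BaseModel):
    values: list[float]  # one row of numeric features

@app.post("/predict")
def predict(features: Features) -> dict:
    # predict_proba returns class probabilities for each input row
    proba = model.predict_proba([features.values])[0]
    return {"probabilities": proba.tolist()}
```

Run it with `uvicorn main:app` and POST a JSON body like `{"values": [0.1, 2.3, 4.5]}` to `/predict`.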
🌐 Language identification for Scandinavian languages
Serving large ML models independently and asynchronously, using a message queue and KV storage to communicate with other services [EXPERIMENT]
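One way to realize this pattern is with Redis playing both roles: a list as the message queue and plain keys as the KV store for results. The sketch below is illustrative; the queue and key names are made up, not from the experiment.

```python
# Hedged sketch: decoupled model serving via a queue plus KV results.
# Producers enqueue jobs; a worker process runs the model and writes
# results under a per-job key that other services can poll.
import json
import uuid

import redis

r = redis.Redis()

def submit(payload: dict) -> str:
    """Producer side: enqueue a job and return its id."""
    job_id = uuid.uuid4().hex
    r.rpush("inference:queue", json.dumps({"id": job_id, "payload": payload}))
    return job_id

def worker_loop(run_model) -> None:
    """Consumer side: block on the queue, run the model, store the result."""
    while True:
        _, raw = r.blpop("inference:queue")  # blocks until a job arrives
        job = json.loads(raw)
        result = run_model(job["payload"])
        # Other services poll this key; expire it so results don't pile up.
        r.set(f"inference:result:{job['id']}", json.dumps(result), ex=3600)
```

The queue absorbs bursts while the model runs at its own pace, and the KV store lets callers fetch results without holding a connection open.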
Collection of OSS models, each packaged into a serving container