Deployment of TensorFlow models into production with TensorFlow Serving, Docker, Kubernetes and Microsoft Azure
Updated Dec 6, 2018 - Python
Tutorial on serving LLMs via vLLM in Docker containers on Kubernetes clusters
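For orientation, querying such a deployment is straightforward once the container is reachable: vLLM exposes an OpenAI-compatible REST API. A minimal sketch, assuming a server on localhost:8000 and a placeholder model name:

```python
import requests

# Assumes a vLLM container exposing its OpenAI-compatible API on port 8000;
# host, port, and model name are placeholder assumptions.
resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "facebook/opt-125m",  # whichever model the server was launched with
        "prompt": "Kubernetes is",
        "max_tokens": 32,
    },
    timeout=30,
)
print(resp.json()["choices"][0]["text"])
```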
Basic example of TensorFlow Serving
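As a minimal sketch of that basic interaction: TensorFlow Serving answers REST predict requests on port 8501 by default; the model name and input shape below are placeholder assumptions.

```python
import requests

# "my_model" and the 4-feature input are illustrative placeholders.
payload = {"instances": [[1.0, 2.0, 3.0, 4.0]]}
resp = requests.post(
    "http://localhost:8501/v1/models/my_model:predict",
    json=payload,
    timeout=10,
)
print(resp.json()["predictions"])
```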
Template for a simple API for serving a model in production.
A proof-of-concept on how to install and use TorchServe in various modes
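Once TorchServe is running, inference goes through its REST API (port 8080 by default). A hedged sketch, where the model name and input file are assumptions:

```python
import requests

# POST /predictions/{model_name} is TorchServe's inference endpoint;
# "my_model" and example_input.json are placeholders.
with open("example_input.json", "rb") as f:
    resp = requests.post(
        "http://localhost:8080/predictions/my_model",
        data=f,
        timeout=10,
    )
# The response format depends on the model's handler.
print(resp.text)
```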
Decoupled serving stack using FastAPI, Kafka, and MongoDB - example
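A minimal sketch of that decoupled pattern, with broker address, topic name, and request schema as assumptions: the API only validates and enqueues; a separate consumer runs inference and persists results to MongoDB.

```python
import json
from fastapi import FastAPI
from kafka import KafkaProducer  # kafka-python

app = FastAPI()
# Broker address and topic name are placeholder assumptions.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

@app.post("/predict")
def enqueue_prediction(features: dict):
    # Publish the request; a separate worker consumes the topic,
    # runs the model, and writes results to MongoDB.
    producer.send("inference-requests", value=features)
    producer.flush()
    return {"status": "queued"}
```

Decoupling the API from inference this way lets the two sides scale independently and absorbs traffic spikes in the queue.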
A simple way to deploy your tensorflow.keras model using Flask
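The usual shape of such a Flask deployment, sketched under assumptions (model path and input format are placeholders):

```python
import numpy as np
from flask import Flask, jsonify, request
from tensorflow import keras

app = Flask(__name__)
# "model.h5" is a placeholder path; load the model once at startup.
model = keras.models.load_model("model.h5")

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"instances": [[...], ...]}; the schema is an assumption.
    instances = np.array(request.get_json()["instances"])
    predictions = model.predict(instances).tolist()
    return jsonify({"predictions": predictions})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```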
AsyncIO serving for data science models
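The core idea of asyncio-based serving is to keep the event loop free while inference runs. A self-contained sketch using only the standard library, where the "model" is a stand-in assumption:

```python
import asyncio
import json

def model_predict(features):
    # Placeholder "model"; a real predictor would be loaded once at startup.
    return sum(features)

async def handle(reader, writer):
    data = await reader.readline()  # one JSON request per line
    features = json.loads(data)
    # Run CPU-bound inference in a thread so the event loop stays responsive.
    result = await asyncio.get_running_loop().run_in_executor(
        None, model_predict, features
    )
    writer.write((json.dumps({"prediction": result}) + "\n").encode())
    await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 8888)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```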
Simple TensorFlow Estimator 1.x example with the Serving API.
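For context, the Estimator-era export path looked roughly like this; the feature spec, toy data, and export directory are placeholder assumptions, and the snippet assumes TensorFlow 1.x (roughly 1.13+ for export_saved_model):

```python
import numpy as np
import tensorflow as tf  # assumes TensorFlow 1.x

# Toy premade Estimator; feature columns and data are placeholders.
feature_columns = [tf.feature_column.numeric_column("x", shape=[4])]
estimator = tf.estimator.LinearRegressor(feature_columns=feature_columns)

# Train briefly so a checkpoint exists to export.
train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"x": np.random.rand(8, 4).astype(np.float32)},
    y=np.random.rand(8).astype(np.float32),
    batch_size=4, num_epochs=1, shuffle=False)
estimator.train(train_input_fn)

# Build a serving input receiver from a raw placeholder and export a
# SavedModel that TensorFlow Serving can load directly.
serving_input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(
    {"x": tf.placeholder(dtype=tf.float32, shape=[None, 4], name="x")})
estimator.export_saved_model("export/", serving_input_fn)
```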
End-to-end text classification MLOps project using Tekton Pipelines
An object-oriented (OOP) approach to training TensorFlow models and serving them with TensorFlow Serving.
Docker-based machine learning model serving
Recipes for reproducing training and serving benchmarks for large machine learning models using GPUs on Google Cloud.
📦 Automated dataset management for ML using Docker containers on AWS
Python wrapper class for OpenVINO Model Server. Users can submit inference requests to OVMS with just a few lines of code.
A simple, consolidated, extensible gRPC-based client implementation for querying TensorFlow Model Server.
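For reference, a minimal gRPC query against TensorFlow Model Server generally looks like this (requires the tensorflow-serving-api package; server address, model and signature names, and the input tensor are assumptions):

```python
import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

# Port 8500 is TensorFlow Serving's default gRPC port; model/signature
# names and the input below are illustrative placeholders.
channel = grpc.insecure_channel("localhost:8500")
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = "my_model"
request.model_spec.signature_name = "serving_default"
request.inputs["input"].CopyFrom(
    tf.make_tensor_proto([[1.0, 2.0, 3.0, 4.0]], dtype=tf.float32)
)

response = stub.Predict(request, 10.0)  # 10-second timeout
print(response.outputs)
```

gRPC keeps the payload in binary protobuf, which is typically faster than the REST endpoint for large tensors.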