Deployment of TensorFlow models into production with TensorFlow Serving, Docker, Kubernetes and Microsoft Azure
Updated Dec 6, 2018 · Python
Tutorial on serving LLMs via vllm in docker containers on kubernetes clusters
Basic example of TensorFlow Serving
AsyncIO serving for data science models
Simple TensorFlow Estimator 1.x example with Serving API.
End-to-end text classification MLOps project using Tekton Pipelines
An object-oriented (OOP) approach to training TensorFlow models and serving them with TensorFlow Serving.
Docker-based Machine Learning models serving
Template for a simple API to have a model serving in production.
Decoupled serving stack using FastAPI, Kafka, and MongoDB - Example
A proof-of-concept on how to install and use TorchServe in various modes
A simple way to deploy your tensorflow.keras model using Flask
Python wrapper class for OpenVINO Model Server. Users can submit inference requests to OVMS with just a few lines of code.
A simple, consolidated, extensible gRPC-based client implementation for querying TensorFlow Model Server.
A kedro-plugin to serve Kedro Pipelines as API
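Several of the repositories listed above (the Flask deployment, the model-serving API template, the OVMS wrapper) follow the same basic pattern: load a trained model once at startup and expose it behind an HTTP prediction endpoint. The following is a minimal, hedged sketch of that pattern in Flask; the `/predict` route, the `instances`/`predictions` payload shape, and the stub model are illustrative assumptions, not the API of any specific repo. The model is stubbed so the sketch runs without TensorFlow installed; in practice you would replace `_StubModel` with e.g. `tf.keras.models.load_model(...)`.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)


class _StubModel:
    """Stand-in for a real trained model (e.g. a loaded tf.keras model).

    Sums each input row to fake a single-output prediction.
    """

    def predict(self, inputs):
        return [[sum(row)] for row in inputs]


# Load the model once at startup, not per request.
model = _StubModel()


@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body like {"instances": [[1.0, 2.0, 3.0], ...]}
    payload = request.get_json(force=True)
    preds = model.predict(payload["instances"])
    return jsonify({"predictions": preds})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

The same request/response shape carries over naturally if the service is later containerized with Docker or put behind TensorFlow Serving's REST API, which uses a similar `instances`/`predictions` JSON convention.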