Deployment of TensorFlow models into production with TensorFlow Serving, Docker, Kubernetes and Microsoft Azure
Updated Dec 6, 2018 · Python
Tutorial on serving LLMs via vllm in docker containers on kubernetes clusters
End-to-end text classification MLOps project using Tekton Pipelines
Template for a simple API to have a model serving in production.
A proof-of-concept on how to install and use TorchServe in various modes
A simple way to deploy your tensorflow.keras model using Flask
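The Flask-serving pattern mentioned above can be sketched in a few lines. This is a generic illustration, not code from that repository: a stub `predict()` stands in for the actual `tensorflow.keras` model, and the route and payload shape (`/predict`, a JSON `features` list) are assumptions.

```python
# Minimal sketch of serving model predictions over HTTP with Flask.
# In a real deployment you would load the model once at startup, e.g.
#   model = tf.keras.models.load_model("model.h5")
# and call model.predict() inside the endpoint.
from flask import Flask, jsonify, request

app = Flask(__name__)


def predict(features):
    # Stub in place of model.predict(); returns the mean as a fake score.
    return {"score": sum(features) / max(len(features), 1)}


@app.route("/predict", methods=["POST"])
def predict_endpoint():
    payload = request.get_json(force=True)
    return jsonify(predict(payload["features"]))


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

Loading the model once at module level (rather than per request) matters in practice, since deserializing a Keras model on every call would dominate the request latency.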
A simple, consolidated, extensible gRPC-based client implementation for querying TensorFlow Model Server.
Docker-based Machine Learning models serving
Simple TensorFlow Estimator 1.x example with Serving API.
Basic example of Tensorflow Serving
Flask image grabber is a modern, lightweight Flask application that uses the ipapi API to retrieve geolocation information based on the user's IP address. It also serves an image file specified in the URL.
Decoupled serving stack using FastAPI, Kafka, and MongoDB - Example
Merging models for TensorFlow Serving with hot updating
Deployment template for serving a ML model as web-service with a REST API.