A flexible, high-performance serving system for machine learning models
Updated Mar 31, 2018 - C++
Base project for serving a machine learning model with BentoML, using a Poetry environment
Deployment of TensorFlow models into production with TensorFlow Serving, Docker, Kubernetes and Microsoft Azure
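Entries like this one typically follow the same pattern: package the model behind TensorFlow Serving's official Docker image and run it on Kubernetes. A minimal Deployment sketch (the `my-model` name and model path are placeholder assumptions, not taken from the repository):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tf-serving            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tf-serving
  template:
    metadata:
      labels:
        app: tf-serving
    spec:
      containers:
        - name: tf-serving
          image: tensorflow/serving           # official image
          args:
            - "--model_name=my-model"         # placeholder model name
            - "--model_base_path=/models/my-model"
          ports:
            - containerPort: 8501             # REST API
            - containerPort: 8500             # gRPC
```

A Service (and, on Azure, an AKS cluster plus a container registry) would sit in front of this; those pieces are omitted here.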
Template for a simple API for serving a model in production.
A proof-of-concept on how to install and use TorchServe in various modes
End-to-end text classification MLOps project using Tekton Pipelines
Demonstrating how to build an XGBoost model and deploy it to Algorithmia, from a Jupyter notebook
A simple way to deploy your tensorflow.keras model using Flask
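The Flask pattern this entry describes is usually a single JSON endpoint wrapping `model.predict`. A minimal sketch, with the TensorFlow-specific parts shown as comments so the skeleton stands alone (the model path and the echo placeholder are assumptions, not the repository's code):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# In a real deployment you would load the model once at startup:
# import tensorflow as tf
# model = tf.keras.models.load_model("model.h5")  # hypothetical path

@app.route("/predict", methods=["POST"])
def predict():
    instances = request.get_json()["instances"]
    # With a real model: preds = model.predict(np.array(instances)).tolist()
    preds = instances  # placeholder echo so the sketch runs without TensorFlow
    return jsonify({"predictions": preds})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

A client would then `POST {"instances": [...]}` to `/predict` and read back `{"predictions": [...]}`, mirroring TensorFlow Serving's REST payload shape.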
Notes on the Jetson TX2 and the application of TensorFlow model serving on it
A simple, consolidated, extensible gRPC-based client implementation for querying TensorFlow Model Server.
This repository addresses multi-class prediction of outcomes for cirrhosis, a chronic liver condition characterized by tissue damage. By forecasting outcomes such as severity levels and disease stages, the model supports more personalized healthcare for cirrhosis patients.
Beginner-friendly starting point for TensorFlow Serving and Docker
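For readers starting here, the official image makes this a two-command exercise. A minimal sketch following the TensorFlow Serving docs (the SavedModel path and model name are placeholders):

```shell
# Pull the official TensorFlow Serving image
docker pull tensorflow/serving

# Serve a SavedModel over REST on port 8501
# (/path/to/my_model is a placeholder for your exported SavedModel directory)
docker run -p 8501:8501 \
  --mount type=bind,source=/path/to/my_model,target=/models/my_model \
  -e MODEL_NAME=my_model -t tensorflow/serving

# Query it:
# curl -d '{"instances": [[1.0, 2.0, 5.0]]}' \
#   http://localhost:8501/v1/models/my_model:predict
```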
Docker-based Machine Learning models serving
Reads files and holds them in memory for performant serving and access
Experimental standalone tensorflow/serving grpc client for ARM