Standardized Serverless ML Inference Platform on Kubernetes
Hopsworks - Data-Intensive AI platform with a Feature Store
🪐 1-click Kubeflow using ArgoCD
AWS SageMaker, SeldonCore, KServe, Kubeflow & MLflow, VectorDB
My repo for the Machine Learning Engineering bootcamp 2022 by DataTalks.Club
Carbon Limiting Auto Tuning for Kubernetes
Deploying a machine learning model using 10+ different deployment tools
Collection of best practices, reference architectures, examples, and utilities for foundation model development and deployment on AWS.
Client/server system to perform distributed inference on high-load systems.
Hands-on labs on deploying machine learning models with tf-serving and KServe (a minimal custom-predictor sketch follows this list)
A demo to accompany our blog post "Scalable Machine Learning with Kafka Streams and KServe"
Everything to get industrial Kubeflow applications running in production
TeiaCareInferenceClient is a C++ inference client library that implements the KServe protocol (a sample inference request is sketched after this list).
A scalable RAG-based Wikipedia Chat Assistant that leverages the Llama-2-7b-chat LLM, with inference served via KServe
KServe TrustyAI explainer
KServe Inference Graph Example
Kubeflow examples - Notebooks, Pipelines, Models, Model tuning and more
An end-to-end machine learning prediction pipeline for the Rossmann store sales problem
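
Several of the labs listed above walk through serving a model with KServe's Python SDK. For orientation only, here is a minimal custom-predictor sketch using the SDK's v1 ("instances") protocol; the model name `demo-model` and the trivial predict logic are hypothetical placeholders, not taken from any repo above.

```python
from kserve import Model, ModelServer


class DemoModel(Model):
    """Minimal KServe custom predictor (hypothetical model name 'demo-model')."""

    def __init__(self, name: str):
        super().__init__(name)
        self.load()

    def load(self):
        # A real predictor would load weights here (e.g., a TF SavedModel
        # or a scikit-learn pickle) before marking itself ready.
        self.ready = True

    def predict(self, payload, headers=None):
        # v1 protocol: the request body is {"instances": [...]}.
        instances = payload["instances"]
        # Placeholder logic: return each instance's length instead of a real score.
        return {"predictions": [len(x) for x in instances]}


if __name__ == "__main__":
    ModelServer().start([DemoModel("demo-model")])
```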
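On the client side, the KServe protocol that TeiaCareInferenceClient implements is the v2 Open Inference Protocol: a POST to `/v2/models/{name}/infer` carrying named, typed tensors. A minimal request sketch in Python, assuming a hypothetical local endpoint, model name, and input tensor name:

```python
import requests

# Hypothetical endpoint: a KServe InferenceService exposing the v2 protocol.
URL = "http://localhost:8080/v2/models/demo-model/infer"

payload = {
    "inputs": [
        {
            "name": "input-0",           # tensor name (assumed)
            "shape": [1, 4],             # one row of four features
            "datatype": "FP32",
            "data": [5.1, 3.5, 1.4, 0.2],
        }
    ]
}

resp = requests.post(URL, json=payload, timeout=10)
resp.raise_for_status()
print(resp.json()["outputs"])  # v2 responses return a list of named output tensors
```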