The easiest way to serve AI/ML models in production - Build Model Inference Service, LLM APIs, Multi-model Inference Graph/Pipelines, LLM/RAG apps, and more!
Updated Jul 11, 2024 - Python
Scaffolding for serving ML model APIs using FastAPI
Kafka variant of the MLOps Level 1 stack
Fast, private data connectors for AI ⚡️🤖
An easy-to-use tool for building a web service API from your own Python functions.
Crack SWE (ML) / DS MAANG Interviews
A task queue enabling websites to serve ML models -- with RabbitMQ, Celery, all the good stuff.
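The task-queue pattern named above can be sketched in miniature: the snippet below is a hedged, in-process illustration only, using the standard library's `queue.Queue` as a stand-in for a RabbitMQ broker and a worker thread as a stand-in for a Celery worker; the `predict` function is a hypothetical placeholder for a real ML model.

```python
import queue
import threading

def predict(x: float) -> float:
    # Hypothetical model: stands in for real ML inference.
    return 2 * x

def worker(tasks: queue.Queue, results: dict) -> None:
    # Worker loop: consume jobs until a sentinel job id of None arrives.
    while True:
        job_id, payload = tasks.get()
        if job_id is None:
            tasks.task_done()
            break
        results[job_id] = predict(payload)
        tasks.task_done()

def run_jobs(payloads):
    # Enqueue each payload, let the worker drain the queue, collect results.
    tasks: queue.Queue = queue.Queue()
    results: dict = {}
    t = threading.Thread(target=worker, args=(tasks, results))
    t.start()
    for i, p in enumerate(payloads):
        tasks.put((i, p))
    tasks.put((None, None))  # sentinel: tell the worker to stop
    tasks.join()
    t.join()
    return results
```

In a real deployment the queue is a broker process (RabbitMQ), the worker runs in a separate Celery process, and the website only enqueues jobs and polls for results, which keeps slow model inference off the request path.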
Classification of scientific articles from the Frontiers publisher. Deployment-ready; usable as a template for text-classification use cases.
A project to build an ETL pipeline and ML application to help respond to disaster events faster
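The extract-transform-load steps such a pipeline performs can be sketched briefly; this is a minimal illustration under assumed data, not the project's actual code: the disaster-message rows and category labels are hypothetical, and an in-memory SQLite database stands in for the real datastore.

```python
import sqlite3

def extract(rows):
    # Extract: pretend these rows came from a CSV of disaster messages.
    return list(rows)

def transform(rows):
    # Transform: lowercase the text and drop rows with no category label.
    return [(msg.lower(), cat) for msg, cat in rows if cat]

def load(rows, conn):
    # Load: write the cleaned rows into a SQLite table for the ML step,
    # and return how many rows the table now holds.
    conn.execute("CREATE TABLE IF NOT EXISTS messages (text TEXT, category TEXT)")
    conn.executemany("INSERT INTO messages VALUES (?, ?)", rows)
    return conn.execute("SELECT COUNT(*) FROM messages").fetchone()[0]

conn = sqlite3.connect(":memory:")
raw = [("Need WATER in sector 4", "aid"), ("hello", None)]
count = load(transform(extract(raw)), conn)
```

The ML application would then read the cleaned `messages` table as its training data rather than re-parsing raw files on every run.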
A neural network that helps determine how well an uploaded video's topic matches the video topics recommended by YouTube.
A library for authoring DLT pipelines via meta-programming patterns and deploying to Databricks workspaces.
⛰️ A machine learning pipeline for disaster alerts
Detecting News Generated by LLMs