A scalable inference server for models optimized with OpenVINO™
Updated May 27, 2024 · C++
RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications.
Serve PyTorch models using Drogon.
Serving object detection models on different hardware targets.