# inference-server
Here are 5 public repositories matching this topic...
Client/server system for performing distributed inference on high-load systems.
docker · cmake · deep-neural-networks · ai · cpp · grpc · conan · inference-server · inference-engine · onnxruntime · kserve · inference-client
Updated Jan 23, 2023 - C++
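Given the grpc and onnxruntime topics, the core of a server like this is one ONNX Runtime inference call per request. Below is a minimal C++ sketch of that call; the model file (`model.onnx`), the 1x3 float input shape, and the tensor names (`input`, `output`) are illustrative assumptions, not this repository's actual code.

```cpp
// Minimal sketch: a single ONNX Runtime inference call, the kind of work a
// gRPC inference server would wrap per request. Paths and names are assumed.
#include <onnxruntime_cxx_api.h>
#include <array>
#include <iostream>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "demo");
  Ort::SessionOptions opts;
  Ort::Session session(env, "model.onnx", opts);  // assumed model file

  // Build a small float tensor on the CPU.
  std::array<float, 3> input{0.1f, 0.2f, 0.3f};
  std::array<int64_t, 2> shape{1, 3};
  auto mem = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
  Ort::Value tensor = Ort::Value::CreateTensor<float>(
      mem, input.data(), input.size(), shape.data(), shape.size());

  // Input/output names must match the model; "input"/"output" are assumptions.
  const char* in_names[] = {"input"};
  const char* out_names[] = {"output"};
  auto outputs = session.Run(Ort::RunOptions{nullptr},
                             in_names, &tensor, 1, out_names, 1);
  std::cout << "got " << outputs.size() << " output tensor(s)\n";
  return 0;
}
```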
ONNX Runtime Server: provides TCP and HTTP/HTTPS REST APIs for ONNX inference.
machine-learning · ai · deep-learning · cuda · inference-server · neural-networks · contributions-welcome · onnx · onnxruntime
Updated Jun 28, 2024 - C++
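Because the server exposes plain HTTP REST, any HTTP client can drive it. Here is a hedged C++/libcurl sketch of a JSON POST; the port, route (`/predict`), and payload shape are assumptions for illustration, so consult the repository for the documented API.

```cpp
// Hedged sketch: POSTing JSON to an HTTP REST inference endpoint with libcurl.
// The URL and JSON body below are illustrative assumptions, not documented routes.
#include <curl/curl.h>
#include <iostream>
#include <string>

// libcurl write callback that appends the response body to a std::string.
static size_t on_body(char* data, size_t size, size_t nmemb, void* userp) {
  static_cast<std::string*>(userp)->append(data, size * nmemb);
  return size * nmemb;
}

int main() {
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURL* curl = curl_easy_init();
  if (!curl) return 1;

  const std::string body = R"({"inputs": {"x": [[0.1, 0.2, 0.3]]}})";  // assumed payload
  std::string response;

  curl_slist* headers = curl_slist_append(nullptr, "Content-Type: application/json");
  curl_easy_setopt(curl, CURLOPT_URL, "http://localhost:8080/predict");  // assumed route
  curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
  curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());
  curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_body);
  curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);

  CURLcode rc = curl_easy_perform(curl);
  if (rc == CURLE_OK) std::cout << response << "\n";
  else std::cerr << curl_easy_strerror(rc) << "\n";

  curl_slist_free_all(headers);
  curl_easy_cleanup(curl);
  curl_global_cleanup();
  return rc == CURLE_OK ? 0 : 1;
}
```

The same request works from curl on the command line; the C++ version is shown only to stay in the listing's language.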
A REST API for Caffe using Docker and Go
Updated Jul 20, 2018 - C++