ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator (C++; updated Sep 23, 2024)
A simple neural network inference framework
Make a distributed deep learning framework from scratch
An optimization framework for TOSA-dialect (MLIR) based distributed or NUMA-targeted workloads