How to Enable the OpenVINO Execution Provider for ONNX Runtime
Updated Jun 29, 2020 - C++
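A minimal sketch of what enabling the provider might look like with the 2020-era ONNX Runtime C API. The provider registration function is only available in ONNX Runtime builds compiled with OpenVINO support; the explicit declaration, the device string "CPU_FP32", and the model path "model.onnx" are assumptions for illustration, not taken from the repository.

```cpp
#include <onnxruntime_cxx_api.h>
#include <iostream>

// Assumption: entry point exposed by ONNX Runtime builds that include the
// OpenVINO execution provider (2020-era C API).
extern "C" OrtStatus* OrtSessionOptionsAppendExecutionProvider_OpenVINO(
    OrtSessionOptions* options, const char* device_id);

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "openvino-ep-demo");
    Ort::SessionOptions session_options;

    // Register the OpenVINO EP so supported subgraphs run through OpenVINO;
    // unsupported operators fall back to the default CPU provider.
    OrtStatus* status = OrtSessionOptionsAppendExecutionProvider_OpenVINO(
        session_options, "CPU_FP32");
    if (status != nullptr) {
        std::cerr << Ort::GetApi().GetErrorMessage(status) << std::endl;
        Ort::GetApi().ReleaseStatus(status);
        return 1;
    }

    // "model.onnx" is a placeholder path.
    Ort::Session session(env, "model.onnx", session_options);
    return 0;
}
```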
Demonstrates how to use the ONNX importer API in the Intel OpenVINO toolkit. This API allows users to load an ONNX model and run inference with the OpenVINO Inference Engine.
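A minimal C++ sketch of that flow, assuming the 2020-era Inference Engine API, where Core::ReadNetwork dispatches .onnx files to the ONNX importer; the model path "model.onnx" and the "CPU" device are placeholder choices.

```cpp
#include <inference_engine.hpp>
#include <iostream>

int main() {
    try {
        InferenceEngine::Core core;

        // ReadNetwork recognizes the .onnx extension and invokes the
        // ONNX importer instead of the IR (.xml/.bin) reader.
        InferenceEngine::CNNNetwork network = core.ReadNetwork("model.onnx");

        // Compile the network for a target device; "CPU" is a placeholder.
        InferenceEngine::ExecutableNetwork exec = core.LoadNetwork(network, "CPU");

        // Run one synchronous inference (input blobs left at defaults here).
        InferenceEngine::InferRequest request = exec.CreateInferRequest();
        request.Infer();

        std::cout << "Inference finished" << std::endl;
    } catch (const std::exception& ex) {
        std::cerr << ex.what() << std::endl;
        return 1;
    }
    return 0;
}
```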
This repository covers the use of an open-source FPGA plugin to execute neural networks on multiple Intel Stratix 10 FPGAs.
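A hedged sketch of how such a plugin might be wired into the Inference Engine. The plugin library name "custom_fpga_plugin", the "FPGA" device label, and the HETERO fallback configuration are assumptions for illustration; how work is actually distributed across several Stratix 10 devices depends on the plugin and is not shown here.

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;

    // Assumption: register the open-source plugin under the "FPGA" device
    // name; Inference Engine wraps the name with the platform's shared
    // library prefix/suffix (e.g. libcustom_fpga_plugin.so on Linux).
    core.RegisterPlugin("custom_fpga_plugin", "FPGA");

    // "model.xml"/"model.bin" are placeholder IR file paths.
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml", "model.bin");

    // HETERO lets layers the FPGA plugin does not support fall back to CPU.
    InferenceEngine::ExecutableNetwork exec =
        core.LoadNetwork(network, "HETERO:FPGA,CPU");

    InferenceEngine::InferRequest request = exec.CreateInferRequest();
    request.Infer();
    return 0;
}
```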