
Releases: intel/onnxruntime

OpenVINO™ Execution Provider for ONNXRuntime 5.3.1

12 Aug 20:52
154084e

Description:
OpenVINO™ Execution Provider For ONNXRuntime v5.3.1 Release based on the latest OpenVINO™ 2024.3 Release

For all the latest information, refer to our official documentation:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html

Announcements:

OpenVINO™ version upgraded to 2024.3. This also provides functional bug fixes.

Please refer to the OpenVINO™ Execution Provider For ONNXRuntime build instructions for information on system pre-requisites as well as instructions to build from source.
https://onnxruntime.ai/docs/build/eps.html#openvino

Modifications:

  • Supports OpenVINO 2024.3.
  • Fixed setting the precision with the AUTO plugin.
  • Ensured fast compile is used for the model path, but not for AUTO:GPU,CPU.
  • Updated the fix for setting the cache with the AUTO plugin.
  • Device update: GPU.1 is now also accepted at runtime with AUTO.
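As an illustrative sketch (option names per the OpenVINO Execution Provider documentation linked above; the model path is hypothetical), a GPU.1 entry in the AUTO device list is passed at runtime through the device_type provider option:

```python
# Provider options for the OpenVINO EP: with this release the AUTO plugin
# also accepts GPU.1 in the device list at runtime.
provider_options = {"device_type": "AUTO:GPU.1,CPU"}
providers = [("OpenVINOExecutionProvider", provider_options)]

# With onnxruntime-openvino installed (model path hypothetical):
# import onnxruntime as ort
# session = ort.InferenceSession("model.onnx", providers=providers)
```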

OpenVINO™ Execution Provider for ONNXRuntime 5.3

21 Jun 15:26
4573740

Description:
OpenVINO™ Execution Provider For ONNXRuntime v5.3 Release based on the latest OpenVINO™ 2024.1 release and the ONNXRuntime 1.18.0 release.

For all the latest information, refer to our official documentation:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html

Announcements:

OpenVINO™ version upgraded to 2024.1. This provides functional bug fixes and new features over the previous release.
This release supports ONNXRuntime 1.18.0 with the latest OpenVINO™ 2024.1 release.
Please refer to the OpenVINO™ Execution Provider For ONNXRuntime build instructions for information on system pre-requisites as well as instructions to build from source.
https://onnxruntime.ai/docs/build/eps.html#openvino

Modifications:

  • Supports OpenVINO 2024.1.
  • Supports NPU as a device option.
  • Separates device and precision: the device is set to CPU, GPU, or NPU, and the inference precision is set as a separate provider option. The combined CPU_FP32 and GPU_FP32 options are deprecated.
  • Supports importing precompiled blobs into OpenVINO.
  • Adds OVEP Windows logging support for NPU; NPU profiling information can be obtained from a debug build of OpenVINO.
  • Packages support NPU on Windows.
  • Supports priority through a runtime provider option.
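A minimal sketch of the new separated device/precision configuration (option names per the OpenVINO Execution Provider documentation; the model path is hypothetical):

```python
# Device and precision are now separate options: "device_type" names the
# device (CPU, GPU, or NPU) and "precision" sets the inference precision,
# replacing the deprecated combined CPU_FP32 / GPU_FP32 values.
provider_options = {"device_type": "NPU", "precision": "FP16"}
providers = [("OpenVINOExecutionProvider", provider_options)]

# With onnxruntime-openvino installed (model path hypothetical):
# import onnxruntime as ort
# session = ort.InferenceSession("model.onnx", providers=providers)
```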

Samples:
https://github.com/microsoft/onnxruntime-inference-examples

Python Package:
https://pypi.org/project/onnxruntime-openvino/

Installation and usage instructions on Windows:

pip install onnxruntime-openvino

# If using the Python openvino package to set up the OpenVINO runtime environment:
pip install openvino==2024.1.0

# Add these two lines to the application code:
import onnxruntime.tools.add_openvino_win_libs as utils
utils.add_openvino_libs_to_path()

ONNXRuntime APIs usage:
Please refer to the link below for Python/C++ APIs:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html#configuration-options

OpenVINO™ Execution Provider for ONNXRuntime 5.2.1

12 Apr 12:12
76d7e17

Description:
OpenVINO™ Execution Provider For ONNXRuntime v5.2.1 Release based on the latest OpenVINO™ 2024.0 Release

For all the latest information, refer to our official documentation:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html

Announcements:

OpenVINO™ version upgraded to 2024.0. This provides NuGet packages aligned with the OpenVINO™ 2024.0 release.

Please refer to the OpenVINO™ Execution Provider For ONNXRuntime build instructions for information on system pre-requisites as well as instructions to build from source.
https://onnxruntime.ai/docs/build/eps.html#openvino

OpenVINO™ Execution Provider for ONNXRuntime 5.2

07 Mar 12:37
8f5c79c

Description:
OpenVINO™ Execution Provider For ONNXRuntime v5.2 Release based on the latest OpenVINO™ 2023.3 release and the ONNXRuntime 1.17.1 release.

For all the latest information, refer to our official documentation:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html

Announcements:

OpenVINO™ version upgraded to 2023.3. This provides functional bug fixes and capability changes over the previous 2022.3.3 release.
This release supports ONNXRuntime 1.17.1 with the latest OpenVINO™ 2023.3 release.
Please refer to the OpenVINO™ Execution Provider For ONNXRuntime build instructions for information on system pre-requisites as well as instructions to build from source.
https://onnxruntime.ai/docs/build/eps.html#openvino

Modifications:

  • Use the provider option disable_dynamic_shapes to infer only with static inputs. The default behaviour is to attempt to compile and infer with symbolic shapes.
  • The provider option enable_dynamic_shapes is deprecated and will be removed in the next release.
  • Introduced the AppendExecutionProvider_OpenVINO_V2 API and support for OV 2023.3.
  • Added support for the OpenVINO 2023.3 official release only.
  • Logging in Debug mode now includes the runtime properties set for devices.
  • Fixed an issue with using external weights through OpenVINO with the read_model API: microsoft#17499
  • The NuGet package contains only ONNXRuntime libs; set up the OpenVINO environment when running the .NET application.
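A sketch of the disable_dynamic_shapes option described above (option names per the OpenVINO Execution Provider documentation; session creation requires onnxruntime-openvino and a real model, so it is left commented):

```python
# Force static-shape inference with disable_dynamic_shapes; by default the
# EP attempts to compile and infer with symbolic shapes.
provider_options = {
    "device_type": "CPU_FP32",         # 5.2-era combined device/precision naming
    "disable_dynamic_shapes": "True",
}
providers = [("OpenVINOExecutionProvider", provider_options)]

# import onnxruntime as ort
# session = ort.InferenceSession("model.onnx", providers=providers)
```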

Samples:
https://github.com/microsoft/onnxruntime-inference-examples

Python Package:
https://pypi.org/project/onnxruntime-openvino/

Installation and usage instructions on Windows:

pip install onnxruntime-openvino

# If using the Python openvino package to set up the OpenVINO runtime environment:
pip install openvino==2023.3.0

# Add these two lines to the application code:
import onnxruntime.tools.add_openvino_win_libs as utils
utils.add_openvino_libs_to_path()

ONNXRuntime APIs usage:
Please refer to the link below for Python/C++ APIs:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html#configuration-options

OpenVINO™ Execution Provider for ONNXRuntime 5.1

20 Oct 15:52
2d16b57

Description:
OpenVINO™ Execution Provider For ONNXRuntime v5.1 Release based on the latest OpenVINO™ 2023.1 release and the ONNXRuntime 1.16 release.

For all the latest information, refer to our official documentation:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html

Announcements:

OpenVINO™ version upgraded to 2023.1. This provides functional bug fixes and capability changes over the previous 2022.3.1 release.
This release supports ONNXRuntime 1.16 with the latest OpenVINO™ 2023.1 release.
Please refer to the OpenVINO™ Execution Provider For ONNXRuntime build instructions for information on system pre-requisites as well as instructions to build from source.
https://onnxruntime.ai/docs/build/eps.html#openvino
A new extensible API has been added for better backward compatibility.
Num streams support has been added.
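The num streams support mentioned above can be sketched as a provider option (option name per the OpenVINO Execution Provider documentation; session creation requires onnxruntime-openvino, so it is left commented):

```python
# num_streams controls the number of parallel inference streams the
# OpenVINO EP may use for throughput-oriented workloads.
provider_options = {"device_type": "CPU_FP32", "num_streams": "4"}
providers = [("OpenVINOExecutionProvider", provider_options)]

# import onnxruntime as ort
# session = ort.InferenceSession("model.onnx", providers=providers)
```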

Samples:
https://github.com/microsoft/onnxruntime-inference-examples

Python Package:
https://pypi.org/project/onnxruntime-openvino/

Installation and usage instructions on Windows:

pip install onnxruntime-openvino
pip install openvino

# Add these two lines to the application code:
import onnxruntime.tools.add_openvino_win_libs as utils
utils.add_openvino_libs_to_path()

ONNXRuntime APIs usage:
Please refer to the link below for Python/C++ APIs:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html#configuration-options

Custom Release: OpenVINO™ Execution Provider for ONNXRuntime 1.15

03 Oct 14:26
3c47cf2

We are releasing a custom OpenVINO™ Execution Provider for ONNXRuntime 1.15, deprecating the OpenVINO 1.0 API and increasing operator coverage. This release is based on OpenVINO™ 2023.1.

For all the latest information, refer to our official documentation:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html

Announcements:

  1. OpenVINO™ version upgraded to 2023.1.0. This provides functional bug fixes and capability changes over the previous 2023.0.0 release.
  2. Improved first inference latency (FIL) with a custom OpenVINO API for model loading across CPU and GPU accelerators.
  3. Added bug fixes for the model caching feature.
  4. Operator coverage compliant with OV 2023.1.

Please refer to the OpenVINO™ Execution Provider For ONNXRuntime build instructions for information on system pre-requisites as well as instructions to build from source.
https://onnxruntime.ai/docs/build/eps.html#openvino

ONNXRuntime APIs usage:
Please refer to the link below for Python/C++ APIs:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html#configuration-options

OpenVINO™ Execution Provider for ONNXRuntime 5.0

21 Jun 02:28

Description:
OpenVINO™ Execution Provider For ONNXRuntime v5.0 Release based on the latest OpenVINO™ 2023.0 release and the ONNXRuntime 1.15 release.

For all the latest information, refer to our official documentation:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html

Announcements:

  1. OpenVINO™ version upgraded to 2023.0.0. This provides functional bug fixes and capability changes over the previous 2022.3.0 release.
  2. This release supports ONNXRuntime 1.15 with the latest OpenVINO™ 2023.0 release.
  3. Hassle-free user experience for OVEP Python developers on the Windows platform: a pip install is all that is required now.
  4. Full model support for Stable Diffusion with dynamic shapes on CPU/GPU.
  5. Improved first inference latency (FIL) with a custom OpenVINO API for model loading.
  6. Model caching is now generic across all accelerators. Kernel caching is enabled for partially supported models.

Please refer to the OpenVINO™ Execution Provider For ONNXRuntime build instructions for information on system pre-requisites as well as instructions to build from source.
https://onnxruntime.ai/docs/build/eps.html#openvino

Samples:
https://github.com/microsoft/onnxruntime-inference-examples

Python Package:
https://pypi.org/project/onnxruntime-openvino/

Installation and usage instructions on Windows:

pip install onnxruntime-openvino
pip install openvino

# Add these two lines to the application code:
import onnxruntime.tools.add_openvino_win_libs as utils
utils.add_openvino_libs_to_path()

ONNXRuntime APIs usage:
Please refer to the link below for Python/C++ APIs:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html#configuration-options

Custom Release Branch OVEP 1.14

03 Apr 14:06
3ebf123
Pre-release

We are releasing a custom build of OVEP 1.14 with specific changes for model caching and improved first inference latency (FIL).
This release is based on a custom OpenVINO™ build; the dependent OpenVINO™ libraries are part of the zip file.

  • Added additional ONNX op support coverage.
  • Improved FIL with custom OpenVINO API for model loading.
  • Model caching along with Kernel caching is enabled.
  • Handled fallback at session creation time at the application level.

For all the latest information, refer to our official documentation:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html

Custom Release Branch OVEP 1.13

03 Apr 14:06
Pre-release

We are releasing a custom build of OVEP 1.13.1 with specific changes for model caching and improved first inference latency (FIL).
This release is based on a custom OpenVINO™ build; the dependent OpenVINO™ libraries are part of the zip file.

  • Added additional ONNX op support coverage.
  • Improved FIL with custom OpenVINO API for model loading.
  • Model caching along with Kernel caching is enabled.
  • Handled fallback at session creation time at the application level.

For all the latest information, refer to our official documentation:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html

OpenVINO EP v4.3 Release for ONNX Runtime & OpenVINO 2022.3

03 Apr 11:58

Description:
OpenVINO™ Execution Provider For ONNXRuntime v4.3 Release based on the latest OpenVINO™ 2022.3 Release.

For all the latest information, refer to our official documentation:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html

Announcements:

  • OpenVINO™ version upgraded to 2022.3.0. This provides functional bug fixes and capability changes over the previous 2022.2.0 release.
  • This release supports ONNXRuntime with the latest OpenVINO™ 2022.3 release.
  • Improved first inference latency (FIL) for the ONNXRuntime OpenVINO Execution Provider.
  • Model caching along with kernel caching is enabled for GPU.
  • Minor bug fixes and code refactoring.
  • Migrated to the OpenVINO™ 2.0 APIs. Removed support for OpenVINO™ 1.0 (v2021.3 and v2021.4).
  • Backward compatibility support for older OpenVINO™ versions (OV 2022.1, OV 2022.2) is available.
  • Replaced the model-caching options use_compile_network and blob_dump_path with a single cache_dir option in the session creation API.
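A sketch of the consolidated caching option (option name per the OpenVINO Execution Provider documentation; the cache directory is an arbitrary example, and session creation requires onnxruntime-openvino, so it is left commented):

```python
# cache_dir replaces the former use_compile_network / blob_dump_path
# options; compiled model blobs and kernels are cached under this directory.
provider_options = {"device_type": "GPU_FP32", "cache_dir": "ov_cache"}
providers = [("OpenVINOExecutionProvider", provider_options)]

# import onnxruntime as ort
# session = ort.InferenceSession("model.onnx", providers=providers)
```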

Build steps:
Please refer to the OpenVINO™ Execution Provider For ONNXRuntime build instructions for information on system pre-requisites as well as instructions to build from source.
https://onnxruntime.ai/docs/build/eps.html#openvino

Samples:
https://github.com/microsoft/onnxruntime-inference-examples

Python Package:
https://pypi.org/project/onnxruntime-openvino/

ONNXRuntime APIs usage:
Please refer to the link below for Python/C++ APIs:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html#configuration-options