From 937b0e6554c810f1c10b25b715896c15922a0f27 Mon Sep 17 00:00:00 2001
From: Vishnudas Thaniel S
Date: Wed, 3 Jul 2024 00:08:17 +0530
Subject: [PATCH] Update ReadMeOV.rst

---
 docs/python/ReadMeOV.rst | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/docs/python/ReadMeOV.rst b/docs/python/ReadMeOV.rst
index 6ef16e1378139..86914699bbf6d 100644
--- a/docs/python/ReadMeOV.rst
+++ b/docs/python/ReadMeOV.rst
@@ -7,6 +7,7 @@ OpenVINO™ Execution Provider for ONNX Runtime accelerates inference across man
 - Intel® CPUs
 - Intel® integrated GPUs
 - Intel® discrete GPUs
+ - Intel® integrated NPUs (Windows only)
 
 Installation
 ------------
@@ -15,26 +16,27 @@ Requirements
 ^^^^^^^^^^^^
 
 - Ubuntu 18.04, 20.04, RHEL(CPU only) or Windows 10 - 64 bit
-- Python 3.8 or 3.9 or 3.10 for Linux and only Python3.10 for Windows
+- Python 3.9, 3.10, or 3.11 for Linux and Python 3.10 or 3.11 for Windows
 
 This package supports:
 - Intel® CPUs
 - Intel® integrated GPUs
 - Intel® discrete GPUs
+ - Intel® integrated NPUs (Windows only)
 
 ``pip3 install onnxruntime-openvino``
 
 Please install OpenVINO™ PyPi Package separately for Windows.
 For installation instructions on Windows please refer to `OpenVINO™ Execution Provider for ONNX Runtime for Windows `_.
 
-**OpenVINO™ Execution Provider for ONNX Runtime** Linux Wheels comes with pre-built libraries of OpenVINO™ version 2023.0.0 eliminating the need to install OpenVINO™ separately. The OpenVINO™ libraries are prebuilt with CXX11_ABI flag set to 0.
+**OpenVINO™ Execution Provider for ONNX Runtime** Linux wheels come with pre-built libraries of OpenVINO™ version 2024.1.0, eliminating the need to install OpenVINO™ separately.
 
 For more details on build and installation please refer to `Build `_.
 
 Usage
 ^^^^^
 
-By default, Intel® CPU is used to run inference. However, you can change the default option to either Intel® integrated or discrete GPU.
+By default, Intel® CPU is used to run inference. However, you can change the default option to Intel® integrated GPU, discrete GPU, or integrated NPU (Windows only).
 Invoke `the provider config device type argument `_ to change the hardware on which inferencing is done.
 
 For more API calls and environment variables, see `Usage `_.
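
For illustration, a minimal sketch of the usage the patched section describes: selecting the target device through the ``device_type`` provider option. This is not part of the patch; ``model.onnx``, the input name, and the input shape are placeholders, and the accepted ``device_type`` values depend on the installed OpenVINO™ EP version::

    import numpy as np
    import onnxruntime as ort

    # Select the OpenVINO™ Execution Provider and pick the target device via
    # the device_type provider option: "CPU" (the default), "GPU" (integrated
    # or discrete), or "NPU" (Windows only). "model.onnx" is a placeholder.
    session = ort.InferenceSession(
        "model.onnx",
        providers=["OpenVINOExecutionProvider"],
        provider_options=[{"device_type": "GPU"}],
    )

    # Run inference; the input name and shape are model-specific placeholders.
    outputs = session.run(None, {"input": np.zeros((1, 3, 224, 224), np.float32)})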