
Running inference with OpenVINOExecutionProvider fails if it was executed with CUDAExecutionProvider #18042

Closed
szobov opened this issue Oct 20, 2023 · 2 comments
Labels
ep:CUDA issues related to the CUDA execution provider ep:OpenVINO issues related to OpenVINO execution provider stale issues that have not been addressed in a while; categorized by a bot

Comments


szobov commented Oct 20, 2023

Describe the issue

Hello, dear maintainers,
I ran into the following problem: I have a setup that runs an ONNX model with the CUDAExecutionProvider. After I changed the code to use the OpenVINOExecutionProvider instead, running the model produced this error:

ERROR - [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: /home/onnxruntimedev/onnxruntime/onnxruntime/core/providers/openvino/ov_interface.cc:53 onnxruntime::openvino_ep::OVExeNetwork onnxruntime::openvino_ep::OVCore::LoadNetwork(const string&, std::string&, ov::AnyMap&, std::string) [OpenVINO-EP]  Exception while Loading Network for graph: OpenVINOExecutionProvider_OpenVINO-EP-subgraph_3_0Check 'false' failed at src/inference/src/core.cpp:149:
Check 'false' failed at src/frontends/onnx/frontend/src/core/graph_cache.cpp:25:
output/pos:0 node not found in graph cache

Traceback (most recent call last):
  File "<stripped>/onnxwrapper.py", line 46, in __load_session
    session = onnxruntime.InferenceSession(
  File "/opt/BDK/python/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 388, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "<stripped>/python/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 440, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: /home/onnxruntimedev/onnxruntime/onnxruntime/core/providers/openvino/ov_interface.cc:53 onnxruntime::openvino_ep::OVExeNetwork onnxruntime::openvino_ep::OVCore::LoadNetwork(const string&, std::string&, ov::AnyMap&, std::string) [OpenVINO-EP]  Exception while Loading Network for graph: OpenVINOExecutionProvider_OpenVINO-EP-subgraph_3_0Check 'false' failed at src/inference/src/core.cpp:149:
Check 'false' failed at src/frontends/onnx/frontend/src/core/graph_cache.cpp:25:
output/pos:0 node not found in graph cache

It works without issues if I copy the same model to a new location and run it from there.

The only workaround I found is passing provider_options to the InferenceSession with the following data:
provider_options=[{"cache_dir": "/home/user/.cache/.onnx_cache/"}] and creating this directory beforehand. Then it works without any issues.

The versions I use are the following:

  • Python 3.10.13 & onnxruntime-openvino 1.15.0
  • Python 3.6 & onnxruntime-gpu 1.4.0
  • NVIDIA-SMI 418.39
  • Driver Version: 418.39
  • CUDA Version: 10.1

Please ask me if you need any more information.
Thanks for your time and help.

To reproduce

1. Create an ONNX model.
2. Run the model in an InferenceSession with the CUDAExecutionProvider.
3. Run the same model in an InferenceSession with the OpenVINOExecutionProvider.
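The steps above can be sketched as a single script, with the caveat that the reporter actually used two separate environments (onnxruntime-gpu for CUDA, onnxruntime-openvino for OpenVINO); `model_path` is a placeholder for any ONNX model file:

```python
def reproduce(model_path):
    """Repro sketch: run the model once with the CUDA EP, then try to
    create an OpenVINO EP session for the same file. In the reported
    setup the second call raises RUNTIME_EXCEPTION during
    initialization ("node not found in graph cache")."""
    import onnxruntime  # imported lazily so the sketch parses without it

    # Step 2: a CUDA session over the model file; this may leave
    # EP-specific artifacts associated with the model.
    onnxruntime.InferenceSession(
        model_path, providers=["CUDAExecutionProvider"])

    # Step 3: the same file with the OpenVINO EP fails to initialize.
    onnxruntime.InferenceSession(
        model_path, providers=["OpenVINOExecutionProvider"])
```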

Urgency

I found a workaround, so I'm not blocked.

Platform

Linux

OS Version

Ubuntu 16.04

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.15.0

ONNX Runtime API

Python

Architecture

X86

Execution Provider

OpenVINO

Execution Provider Library Version

10.1

@github-actions github-actions bot added ep:CUDA issues related to the CUDA execution provider ep:OpenVINO issues related to OpenVINO execution provider labels Oct 20, 2023
This issue has been automatically marked as stale due to inactivity and will be closed in 7 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

@github-actions github-actions bot added the stale issues that have not been addressed in a while; categorized by a bot label Nov 20, 2023
github-actions bot commented Jan 5, 2024

This issue has been automatically closed due to inactivity. Please reactivate if further support is needed.

@github-actions github-actions bot closed this as not planned Won't fix, can't repro, duplicate, stale Jan 5, 2024