Running inference with OpenVINOExecutionProvider fails if the model was previously executed with CUDAExecutionProvider #18042
Labels
ep:CUDA (issues related to the CUDA execution provider)
ep:OpenVINO (issues related to the OpenVINO execution provider)
stale (issues that have not been addressed in a while; categorized by a bot)
Describe the issue
Hello, dear maintainers,
I faced the following issue: I have a setup that runs an ONNX model with the CUDAExecutionProvider. I changed the code to switch to the OpenVINOExecutionProvider and got an error when running inference. It works without issues if I copy the same model to a new location and run it from there.
The only workaround I found is passing provider_options to the InferenceSession with the following data: provider_options={"cache_dir": "/home/user/.cache/.onnx_cache/"} and creating this directory beforehand. Then it works without any issues; a sketch of this is shown below.
The versions I use are listed in the fields below.
Please ask me if you need any more information.
Thanks for your time and help.
To reproduce
1. Create an ONNX model.
2. Run the model with an InferenceSession using CUDAExecutionProvider.
3. Run the same model with an InferenceSession using OpenVINOExecutionProvider; a minimal script is sketched after these steps.
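A minimal repro sketch, assuming model.onnx is any exported model with float32 inputs and that both execution providers are installed; the zero-filled dummy inputs are purely illustrative:

```python
import numpy as np
import onnxruntime as ort

model_path = "model.onnx"  # any exported ONNX model

def run_once(providers):
    # Create a session with the given providers and run one inference
    # with zero-filled float32 inputs (dynamic dimensions are set to 1).
    session = ort.InferenceSession(model_path, providers=providers)
    feeds = {}
    for inp in session.get_inputs():
        shape = [d if isinstance(d, int) else 1 for d in inp.shape]
        feeds[inp.name] = np.zeros(shape, dtype=np.float32)
    return session.run(None, feeds)

# Step 2: run with the CUDA execution provider.
run_once(["CUDAExecutionProvider", "CPUExecutionProvider"])

# Step 3: run the same model with the OpenVINO execution provider;
# this is where the error was observed.
run_once(["OpenVINOExecutionProvider", "CPUExecutionProvider"])
```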
Urgency
I found a workaround, so I'm not blocked.
Platform
Linux
OS Version
Ubuntu 16.04
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.15.0
ONNX Runtime API
Python
Architecture
X86
Execution Provider
OpenVINO
Execution Provider Library Version
10.1