Describe the issue
The attached ONNX model executes successfully with the CUDAExecutionProvider and produces the following outputs:
```
[array([[[[-0.6251511 , -0.5969454 ,  0.13909698, -0.27613088],
          [ 0.00713499, -0.06079938,  0.07727996,  0.09848484],
          [-1.3456749 ,  1.2606536 , -2.1878982 , -0.06534719],
          [ 1.7707846 , -1.3586597 , -1.1241275 ,  1.1630629 ]],

         [[ 1.5288247 ,  1.5974303 , -0.09236689,  1.2441877 ],
          [-0.4477599 ,  0.20678943,  0.2639845 , -0.38207826],
          [ 1.0393754 ,  2.0902128 , -0.4672897 ,  1.8636966 ],
          [ 1.2390026 , -0.38409668, -0.29675618, -2.113882  ]],

         [[-1.2571493 ,  0.19212584, -0.48622572,  1.3817313 ],
          [ 0.2130472 ,  0.12426434,  0.18794645, -1.4493794 ],
          [ 0.38522267,  0.55802476, -1.0083855 ,  0.12431115],
          [-1.9025584 , -1.1199955 , -0.97456354,  0.5826947 ]]]],
      dtype=float32)]
```
However, when I run the same model with the CPUExecutionProvider, onnxruntime crashes during session creation:
```
Traceback (most recent call last):
  File "/home/carla/Documents/test/test.py", line 31, in <module>
    test()
  File "/home/carla/Documents/test/test.py", line 26, in test
    cpu_ort_session = onnxruntime.InferenceSession(
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/carla/anaconda3/envs/onnruntime-gpu/lib/python3.12/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 471, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/home/carla/anaconda3/envs/onnruntime-gpu/lib/python3.12/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 570, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.NotImplemented: [ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Could not find an implementation for Resize(13) node with name ''
```
To reproduce
Environment
OS: Ubuntu 20.04
onnxruntime: 1.23.0.dev20250515001
CUDA: cuda-12.2.2::cuda-toolkit
CUDNN: 9.1.1.17
NVIDIA GPU: GeForce RTX 3080
NVIDIA Driver Version: 535.183.01
Python Version: 3.12.9
Steps to reproduce
The bug can be reproduced with the following code and the model in the attachment.
```python
import pickle

import onnx
import onnxruntime


def test():
    onnx_model = onnx.load("111.onnx")
    with open("inputs.pkl", "rb") as fp:
        inputs = pickle.load(fp)

    sess_options = onnxruntime.SessionOptions()
    gpu_ort_session = onnxruntime.InferenceSession(
        onnx_model.SerializeToString(), sess_options, providers=["CUDAExecutionProvider"]
    )
    gpu_ort_output = gpu_ort_session.run(None, inputs)  # None -> fetch all outputs
    print(gpu_ort_output)

    # ------------------------------------------------------
    # Session creation below raises NOT_IMPLEMENTED for Resize(13) on CPU.
    cpu_ort_session = onnxruntime.InferenceSession(
        onnx_model.SerializeToString(), sess_options, providers=["CPUExecutionProvider"]
    )


if __name__ == "__main__":
    test()
```
Urgency
No response
Platform
Linux
OS Version
Ubuntu 20.04
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.23.0.dev20250515001
ONNX Runtime API
Python
Architecture
X64
Execution Provider
Default CPU, CUDA
Execution Provider Library Version
cuda-12.2.2::cuda-toolkit, cudnn-9.1.1.17