onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from ./model1.onnx failed:This is an invalid model. Type Error: Type 'tensor(int64)' of input parameter (655) of operator (Clip) in node (Clip_354) is invalid.
#10399 · Closed · quanliu1991 opened this issue Jan 26, 2022 · 4 comments
Describe the bug
Used torch.onnx.export() to convert the Detectron2 FasterRCNN model (config faster_rcnn_R_50_C4_1x.yaml) to model2.onnx, but when creating the session with
sess = onnxruntime.InferenceSession(model_path, sess_options=sess_opt, providers=providers), an InvalidGraph error occurred.
System information
OS Platform: Linux CentOS 7.9
ONNX Runtime installed from: binary
ONNX Runtime version: 1.10 (GPU)
Python version: 3.7.11
CUDA/cuDNN version: 11.4/8.2.4
detectron2: 0.6
To Reproduce
PyTorch-to-ONNX conversion:
Run detectron2/tools/deploy/export_model.py --config-file faster_rcnn_R_50_C4_1x.yaml to export the ONNX model.
onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from ./model1.onnx failed:This is an invalid model. Type Error: Type 'tensor(int64)' of input parameter (655) of operator (Clip) in node (Clip_354) is invalid.
The following error also occurs in onnxruntime.InferenceSession when the TensorRT execution provider is enabled:
2022-01-30 19:02:16.145065205 [E:onnxruntime:, inference_session.cc:1448 operator()] Exception during initialization: /onnxruntime_src/onnxruntime/core/providers/tensorrt/tensorrt_execution_provider.cc:925 SubGraphCollection_t onnxruntime::TensorrtExecutionProvider::GetSupportedList(SubGraphCollection_t, int, int, const onnxruntime::GraphViewer&, bool*) const [ONNXRuntimeError] : 1 : FAIL : TensorRT input: 717 has no shape specified. Please run shape inference on the onnx model first. Details can be found in https://www.onnxruntime.ai/docs/reference/execution-providers/TensorRT-ExecutionProvider.html#shape-inference-for-tensorrt-subgraphs
Expected behavior
model2.onnx loads and runs successfully in onnxruntime.InferenceSession.
Additional context
model2.onnx download link:
https://drive.google.com/file/d/18sRJ6GR2LkhycG3EdYswWKQfDQiuB7HV/view?usp=sharing