
onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from ./model1.onnx failed:This is an invalid model. Type Error: Type 'tensor(int64)' of input parameter (655) of operator (Clip) in node (Clip_354) is invalid. #10399

Closed
quanliu1991 opened this issue Jan 26, 2022 · 4 comments
Labels
converter related to ONNX converters ep:CUDA issues related to the CUDA execution provider

Comments

@quanliu1991

Describe the bug
Using torch.onnx.export(), I converted the Detectron2 FasterRCNN model (config faster_rcnn_R_50_C4_1x.yaml) to model2.onnx, but when calling
sess = onnxruntime.InferenceSession(model_path, sess_options=sess_opt, providers=providers), an InvalidGraph error occurred.

System information

  • OS Platform: Linux CentOS 7.9

  • ONNX Runtime installed from: binary

  • ONNX Runtime version: 1.10 (GPU)

  • Python version: 3.7.11

  • CUDA/cuDNN version: 11.4/8.2.4

  • detectron2: 0.6

To Reproduce

  • Convert the PyTorch model to ONNX:
    run detectron2/tools/deploy/export_model.py with --config-file faster_rcnn_R_50_C4_1x.yaml to export ONNX (a sketch of the full invocation follows the code below).
  • Run the exported model with ONNX Runtime:

        import onnxruntime as rt

        device_id = 0  # was left undefined in the original snippet

        providers = [
            ('CUDAExecutionProvider', {
                'device_id': device_id,
                'cudnn_conv_algo_search': 'HEURISTIC',
            })
        ]
        sess_opt = rt.SessionOptions()
        sess = rt.InferenceSession("./model2.onnx", sess_options=sess_opt, providers=providers)
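For the first step above, the full export invocation would look roughly like this (a sketch only; the --output, --export-method, and --format flags are assumptions based on detectron2's export_model.py and were not given in the original report):

    python detectron2/tools/deploy/export_model.py \
        --config-file faster_rcnn_R_50_C4_1x.yaml \
        --export-method tracing \
        --format onnx \
        --output ./output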

error log:

onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from ./model1.onnx failed:This is an invalid model. Type Error: Type 'tensor(int64)' of input parameter (655) of operator (Clip) in node (Clip_354) is invalid.

Expected behavior
model2.onnx loads and runs successfully.

Additional context
model2.onnx download link:
https://drive.google.com/file/d/18sRJ6GR2LkhycG3EdYswWKQfDQiuB7HV/view?usp=sharing

@ashbhandare ashbhandare added core runtime issues related to core runtime ep:CUDA issues related to the CUDA execution provider labels Jan 26, 2022
@yuslepukhin yuslepukhin added converter related to ONNX converters and removed core runtime issues related to core runtime labels Jan 28, 2022

garymm commented Jan 28, 2022

This is probably a bug in torch.onnx.export, so if you can repro this on the latest PyTorch nightly, you can file a bug in github.com/pytorch/pytorch.

However, I think you can probably work around this by exporting with op set version 12.

I guess edit this line:
https://github.com/facebookresearch/detectron2/blob/main/tools/deploy/export_model.py#L127

Why I think this could work:
int64 support for Clip was added in op set 12.
Compare Clip-12 with Clip-11.
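A minimal sketch of what the opset bump looks like in a direct torch.onnx.export call (the model and inputs here are stand-ins, not detectron2's actual export path, which goes through export_model.py):

    import torch

    # Tiny stand-in module; detectron2's export_model.py builds the real model internally.
    model = torch.nn.ReLU()
    sample_inputs = (torch.randn(1, 3, 8, 8),)

    torch.onnx.export(
        model,
        sample_inputs,
        "clip_opset12_demo.onnx",
        opset_version=12,  # Clip accepts int64 min/max starting with opset 12
    )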

quanliu1991 (Author) commented:

@garymm
First, thank you very much. After I re-exported the ONNX model as model2with12.onnx with op set version 12, it works on CUDA.

However, when I try to use the same ONNX model with the TensorRT execution provider, a new error occurs: TensorRT input: 717 has no shape specified.
The code is shown below:

    import numpy as np
    import onnxruntime

    model_path = "./model2with12.onnx"

    providers = [
        ('TensorrtExecutionProvider', {
            'device_id': 0,
        })
    ]
    sess_opt = onnxruntime.SessionOptions()
    sess = onnxruntime.InferenceSession(model_path, sess_options=sess_opt, providers=providers)

    image = np.random.randint(1, 255, size=(3, 800, 1202), dtype=np.uint8)
    sess.run([sess.get_outputs()[0].name], {sess.get_inputs()[0].name: image})

The following error occurs in onnxruntime.InferenceSession:

2022-01-30 19:02:16.145065205 [E:onnxruntime:, inference_session.cc:1448 operator()] Exception during initialization: /onnxruntime_src/onnxruntime/core/providers/tensorrt/tensorrt_execution_provider.cc:925 SubGraphCollection_t onnxruntime::TensorrtExecutionProvider::GetSupportedList(SubGraphCollection_t, int, int, const onnxruntime::GraphViewer&, bool*) const [ONNXRuntimeError] : 1 : FAIL : TensorRT input: 717 has no shape specified. Please run shape inference on the onnx model first. Details can be found in https://www.onnxruntime.ai/docs/reference/execution-providers/TensorRT-ExecutionProvider.html#shape-inference-for-tensorrt-subgraphs

Running symbolic_shape_infer also produced an error:

python symbolic_shape_infer.py --input ./model2with12.onnx  --output ./out_model2with12.onnx --auto_merge --verbose 3
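(For reference, the equivalent call through the Python API would look roughly like this, assuming the SymbolicShapeInference class shipped in the onnxruntime wheel:)

    import onnx
    from onnxruntime.tools.symbolic_shape_infer import SymbolicShapeInference

    model = onnx.load("./model2with12.onnx")
    # auto_merge merges conflicting symbolic dimensions instead of failing
    inferred = SymbolicShapeInference.infer_shapes(model, auto_merge=True, verbose=3)
    onnx.save(inferred, "./out_model2with12.onnx")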

I don't know how to solve this kind of problem; I expected the ONNX model to work on the TensorRT execution provider.

model2with12.onnx download link:
https://drive.google.com/file/d/1_egymUZukkjzNfNDSVYIzLGpGRBfuIRQ/view?usp=sharing
input image ndarray info:
shape is [3, 800, 1202]
dtype is uint8


garymm commented Jan 31, 2022

@quanliu1991 please open a new issue for that.
Closing this since you worked around the original issue.

@garymm garymm closed this as completed Jan 31, 2022
quanliu1991 (Author) commented:

@garymm Thank you, I have opened a new issue: #10443.
