
TensorRT 6.0 ONNX_Parser doesn't support the ONNX model exported by PyTorch 1.3.1 #376

Closed
RizhaoCai opened this issue Feb 12, 2020 · 1 comment


@RizhaoCai RizhaoCai commented Feb 12, 2020

Description

TensorRT 6.0's ONNX parser does not support ONNX models exported by PyTorch 1.3.1.

The TensorRT ONNX parser does not appear to be fully compatible with newer PyTorch versions (1.3 or 1.4).

I have a Jetson TX2 (JetPack 4.3, TensorRT 6) on which I want to deploy my model.
This is my workflow:

  1. Run training on GTX 1080 with PyTorch 1.3.1
  2. Export the model to an ONNX model with the ONNX exporter shipped with PyTorch 1.3.1.
  3. Transfer the ONNX model to my TX2
  4. Use the ONNX parser of TensorRT (Python API) to build my engine.

However, building the engine fails with:
[TensorRT] ERROR: Network must have at least one output
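
For reference, here is roughly how I build the engine in step 4 (a minimal sketch with the TensorRT 6 Python API; the file name and workspace size are placeholders, not my exact script):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path="model.onnx"):  # placeholder path
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network()
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            # If the parser hits an unsupported op it stops early,
            # so no output tensors get marked on the network.
            for i in range(parser.num_errors):
                print(parser.get_error(i))

    print("network outputs:", network.num_outputs)  # prints 0 in the failing case
    builder.max_workspace_size = 1 << 28  # placeholder
    # Fails with "[TensorRT] ERROR: Network must have at least one output"
    return builder.build_cuda_engine(network)
```

When the parse fails partway through, `network.num_outputs` is 0, which is what triggers the error above.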

There are existing issues related to this error (e.g. #319, #286), but they did not take the PyTorch version into account, so I am pointing it out here. I am opening this issue as a reminder for anyone who hits the same problem but doesn't know how to solve it.

At the very beginning, I thought switching to opset 7 might help, since the TensorRT documentation mentions:

In general, the newer version of the ONNX Parser is designed to be backward compatible up to opset 7

Then, I tried
torch.onnx.export(model, dummy_input, onnx_model_path, verbose=True, input_names=input_names, output_names=output_names, opset_version=7)
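
For completeness, a self-contained version of the export (the model and input shape here are placeholders, not my actual network):

```python
import torch
import torchvision

# Placeholders: substitute your own model and input shape.
model = torchvision.models.resnet18(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)
input_names = ["input"]
output_names = ["output"]

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",          # placeholder output path
    verbose=True,
    input_names=input_names,
    output_names=output_names,
    opset_version=7,       # opset 7, per the TensorRT docs quoted above
)
```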

When I used PyTorch 1.3.1, the problem was still there, and the size of the exported model was 13,599 KB.
Interestingly, when I used PyTorch 1.2.0, the size was 13,986 KB, which means that different PyTorch versions exporting with the same opset version are not guaranteed to produce the same ONNX model.

Therefore, using PyTorch 1.2 may help you solve the problem (1.1 as well).
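
If you want to check which PyTorch and opset versions a given ONNX file was exported with, the onnx package records both in the model metadata (quick sketch; "model.onnx" is a placeholder):

```python
import onnx

m = onnx.load("model.onnx")
print("producer:", m.producer_name, m.producer_version)              # e.g. pytorch 1.3
print("opsets:", [(op.domain, op.version) for op in m.opset_import])
```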

Environment

TensorRT Version: 6.0.1
GPU Type: TX2
CUDA Version: 10.0
CUDNN Version: 7.6
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6
PyTorch Version (if applicable): 1.3.1

@rmccorm4 rmccorm4 (Collaborator) commented Feb 13, 2020

Hi @RizhaoCai ,

PyTorch 1.3 + TensorRT 6 is a known incompatibility. Please use PyTorch <= 1.2 with TensorRT 6, or upgrade to TensorRT 7 (once it becomes available for Jetson, since you mentioned the TX2).
