
F.conv onnx export better support #54314

Closed
lucasjinreal opened this issue Mar 19, 2021 · 3 comments
Labels
module: onnx (Related to torch.onnx)
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Comments

lucasjinreal commented Mar 19, 2021

Please test this simple model export:

import torch
import torch.nn as nn
import torch.nn.functional as F


class MG(nn.Module):

    def __init__(self):
        super().__init__()

    def forward(self, x, b):
        # The filter b is an ordinary runtime tensor, not a module parameter.
        preds = F.conv2d(x, b, stride=1)
        return preds


torch_model = MG()
x = torch.randn([1, 4, 24, 24])
b = torch.randn([8, 4, 3, 3])
torch_out = torch_model(x, b)

# Export the model
torch.onnx.export(torch_model,               # model being run
                  (x, b),                    # model inputs
                  "a.onnx",                  # output file
                  export_params=True,        # store the trained parameter weights inside the model file
                  opset_version=12,          # the ONNX version to export the model to
                  do_constant_folding=True,  # fold constant expressions during export
                  verbose=True)
print('Done!')

This is a dead simple model, but PyTorch cannot export it in a form that is convertible to TensorRT.

When I convert it to TensorRT, I get:

❯ onnx2trt a.onnx 
----------------------------------------------------------------
Input filename:   a.onnx
ONNX IR version:  0.0.6
Opset version:    12
Producer name:    pytorch
Producer version: 1.7
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
Parsing model
While parsing node number 0 [Conv -> "2"]:
ERROR: /home/onnx-tensorrt/builtin_op_importers.cpp:512 In function importConv:
[8] Assertion failed: ctx->network()->hasExplicitPrecision() && "TensorRT only supports multi-input conv for explicit precision QAT networks!"
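
(Not part of the original report, but a quick way to see why the parser complains: inspect the exported file with the onnx package and check whether the filter arrived as an initializer or as a second graph input.)

import onnx

m = onnx.load("a.onnx")
print("graph inputs:      ", [i.name for i in m.graph.input])
print("graph initializers:", [i.name for i in m.graph.initializer])
# For this export the filter shows up as a second graph input and the
# initializer list is empty, so the Conv node consumes two dynamic
# tensors -- exactly the multi-input case the TensorRT parser rejects
# outside explicit-precision QAT networks.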


I am not sure whether this is a PyTorch-side or an onnx-tensorrt-side problem, but I cannot convert any model that contains a self-defined F.conv2d op, for example SOLOv2.

If anyone knows how to solve this, please help.

cc @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof @SplitInfinity

zou3519 added the module: onnx label on Mar 19, 2021
heitorschueroff added the triaged label on Mar 22, 2021
lucasjinreal (Author)
@heitorschueroff I guess this issue has been triaged but has not received a response? Any updates?

lucasjinreal (Author)
Again, I found that exporting with PyTorch 1.6 gives the same result, so it might be an onnx-tensorrt issue.

garymm (Collaborator) commented May 4, 2022

Seems this is a limitation of onnx-tensorrt, not torch.onnx. Tracked by onnx/onnx-tensorrt#609.
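
(Not from the thread, but a common workaround when the filter is in fact fixed at export time: register it as an nn.Parameter so the exporter serializes it as an ONNX initializer rather than a graph input. MGFixed is a hypothetical name; this sketch does not help when the filter is computed at runtime, as in SOLOv2's dynamic convs, which is exactly the case onnx/onnx-tensorrt#609 tracks.)

import torch
import torch.nn as nn
import torch.nn.functional as F


class MGFixed(nn.Module):
    def __init__(self, weight):
        super().__init__()
        # A registered parameter is exported as an initializer (a constant),
        # which the TensorRT parser accepts as Conv weights.
        self.weight = nn.Parameter(weight)

    def forward(self, x):
        return F.conv2d(x, self.weight, stride=1)


x = torch.randn(1, 4, 24, 24)
b = torch.randn(8, 4, 3, 3)
torch.onnx.export(MGFixed(b), (x,), "a_fixed.onnx",
                  export_params=True, opset_version=12)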

garymm closed this as completed May 4, 2022