
Converting multi-input conv layer to tensorrt failed. #609

Closed
weizhiyi777 opened this issue Jan 4, 2021 · 25 comments
Assignees
Labels
enhancement New feature or request triaged Issue has been triaged by maintainers

Comments

@weizhiyi777

weizhiyi777 commented Jan 4, 2021

Hello guys,

I tried converting a model (PyTorch -> ONNX -> TensorRT) that contains one multi-input conv layer, but it failed :(
Here is the script that converts the PyTorch model to ONNX:

import torch
import torch.nn.functional as F

class my_conv_model(torch.nn.Module):

    def __init__(self):
        super(my_conv_model, self).__init__()

    def forward(self, input):
        # The kernel is created at runtime, so it is exported as a second
        # input to the Conv node instead of a static initializer.
        kernel = torch.rand(16, 3, 1, 1)
        output = F.conv2d(input, kernel, stride=1)
        return output

if __name__ == "__main__":
    net = my_conv_model()
    input_tensor = torch.rand(1, 3, 1024, 1728)
    
    input_names, output_names = [ "input_onnx"], [ "output_onnx" ]
    torch.onnx.export(net, input_tensor, "test.onnx", verbose=True, input_names=input_names, output_names=output_names, opset_version=10)

I also used the onnx Python API to print some of the model's node info:

output: "1"
op_type: "RandomUniform"
attribute {
  name: "shape"
  ints: 16
  ints: 3
  ints: 1
  ints: 1
  type: INTS
}

input: "input_onnx"
input: "1"
output: "output_onnx"
op_type: "Conv"
attribute {
  name: "dilations"
  ints: 1
  ints: 1
  type: INTS
}
attribute {
  name: "group"
  i: 1
  type: INT
}
attribute {
  name: "kernel_shape"
  ints: 1
  ints: 1
  type: INTS
}
attribute {
  name: "pads"
  ints: 0
  ints: 0
  ints: 0
  ints: 0
  type: INTS
}
attribute {
  name: "strides"
  ints: 1
  ints: 1
  type: INTS
}

When I used the onnx2trt tool to convert this ONNX model to a TensorRT engine, I got the following error:

----------------------------------------------------------------
Input filename:   /root/data/Commonly_Used_Files/Model/solov2/onnx/solov2_test.onnx
ONNX IR version:  0.0.4
Opset version:    10
Producer name:    pytorch
Producer version: 1.3
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
Parsing model
Building TensorRT engine, FP16 available:1
    Max batch size:     32
    Max workspace size: 1024 MiB
[2021-01-04 03:20:29   ERROR] _0: kernel weights has count 0 but 48 was expected
[2021-01-04 03:20:29   ERROR] _0: count of 0 weights in kernel, but kernel dimensions (1,1) with 3 input channels, 16 output channels and 1 groups were specified. Expected Weights count is 3 * 1*1 * 16 / 1 = 48
[2021-01-04 03:20:29   ERROR] Layer _0 failed validation
[2021-01-04 03:20:29   ERROR] Network validation failed.

I have read the code, and it seems onnx-tensorrt should actually be able to support a multi-input conv layer. Could you help look into this issue?

What's more, the version info is as follows:

pytorch: 1.4.0
tensorrt: 7.1.2.8
onnx-tensorrt: 7.1.0

Thanks a lot!

Best regards,
Wei Zhiyi

@lucasjinreal
Contributor

@weizhiyi777 No, there is a bug in your code, and onnx2trt currently does not support multi-input conv.

See my issue for a reproduction; once you export the ONNX model correctly, you will get an error like this:

[8] Assertion failed: ctx->network()->hasExplicitPrecision() && "TensorRT only supports multi-input conv for explicit precision QAT networks!"

@kevinch-nv kevinch-nv added triaged Issue has been triaged by maintainers enhancement New feature or request labels May 10, 2021
@kevinch-nv
Collaborator

We only support multi-input convs for quantized networks. What is the use case of this conv?

@weizhiyi777
Author

We only support multi-input convs for quantized networks. What is the use case of this conv?

Hi @kevinch-nv, multi-input convs are used in the SOLOv2 network, an instance segmentation network. One of its innovations is using features to predict the weights of convs; these predicted weights are then used in subsequent conv operations. Details can be found here:

[Paper] https://arxiv.org/abs/2003.10152
[Github Repository] https://github.com/WXinlong/SOLO

I think TensorRT could support multi-input conv ops. It would be very helpful for onnx-tensorrt to support this, because more and more networks are starting to predict conv weights.

@lucasjinreal
Contributor

@weizhiyi777 Some people have been able to convert SOLOv2 to TensorRT: https://mp.weixin.qq.com/s/gk3Rq2kmZ159gZdYNGGvMA

@weizhiyi777
Author

@jinfagang Thanks a lot! I saw this problem was dealt with by other solutions. I will try that.

@Xiaoyw1998

I also ran into this problem when using F.conv2d. Can you tell me how to solve it? @jinfagang @weizhiyi777

@weizhiyi777
Author

@Xiaoyw1998 I haven't solved it yet, but this link may give you some inspiration: https://mp.weixin.qq.com/s/gk3Rq2kmZ159gZdYNGGvMA.
In that blog, the author uses matrix multiplication to replace F.conv2d, which is supported by ONNX. You could try that solution if you have time.
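The matmul replacement described above can be sketched as follows for a 1x1 conv (function name is illustrative, not from the blog): a 1x1 convolution is just a matrix multiplication over the channel dimension, which exports to a plain MatMul node and avoids the multi-input Conv node entirely.

```python
import torch

# Illustrative sketch: replace a 1x1 F.conv2d with a matrix
# multiplication over the channel dimension.
def conv1x1_as_matmul(x, kernel):
    # x: (N, C_in, H, W), kernel: (C_out, C_in, 1, 1)
    n, c_in, h, w = x.shape
    c_out = kernel.shape[0]
    w2d = kernel.view(c_out, c_in)   # (C_out, C_in)
    x2d = x.view(n, c_in, h * w)     # (N, C_in, H*W)
    y = torch.matmul(w2d, x2d)       # broadcasts to (N, C_out, H*W)
    return y.view(n, c_out, h, w)
```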

@Xiaoyw1998

@Xiaoyw1998 I haven't solved it yet, but this link may give you some inspiration: https://mp.weixin.qq.com/s/gk3Rq2kmZ159gZdYNGGvMA.
In that blog, the author uses matrix multiplication to replace F.conv2d, which is supported by ONNX. You could try that solution if you have time.

Thank you for your reply. But I want to use a 7x7 conv, which cannot be replaced by a plain matrix multiplication.
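For what it's worth, a KxK conv can still be written as a matrix multiplication via im2col (torch.nn.functional.unfold), so the workaround is not strictly limited to 1x1 kernels. A hedged sketch (function name is illustrative; assumes stride 1 and dilation 1):

```python
import torch
import torch.nn.functional as F

# Illustrative im2col sketch: express a KxK convolution as
# unfold (im2col) followed by a matrix multiplication.
def conv_kxk_as_matmul(x, kernel, padding=3):
    # x: (N, C_in, H, W), kernel: (C_out, C_in, K, K)
    n, _, h, w = x.shape
    c_out, c_in, k, _ = kernel.shape
    cols = F.unfold(x, kernel_size=k, padding=padding)  # (N, C_in*K*K, L)
    w2d = kernel.view(c_out, c_in * k * k)              # (C_out, C_in*K*K)
    y = torch.matmul(w2d, cols)                         # (N, C_out, L)
    h_out = h + 2 * padding - k + 1
    w_out = w + 2 * padding - k + 1
    return y.view(n, c_out, h_out, w_out)
```

Both Unfold and MatMul export to ONNX, though this can cost more memory than a native conv.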

@furkancoskun

@weizhiyi777 @jinfagang @Xiaoyw1998 Have you solved the problem with Multi Input Convolution?

@Xiaoyw1998

@weizhiyi777 @jinfagang @Xiaoyw1998 Have you solved the problem with Multi Input Convolution?

No

@kevinch-nv
Collaborator

This is still a known limitation inside TensorRT. We are planning to support this in a future release of TensorRT; in the meantime, it's recommended to export your models with static conv weights if possible.
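The static-weights recommendation above can be sketched like this, assuming the kernel can be fixed at export time (class name is illustrative): registering the kernel as an nn.Parameter makes torch.onnx.export record it as an ONNX initializer instead of a second runtime input to the Conv node.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch: make the conv kernel a module parameter so it is
# exported as a static ONNX initializer rather than a runtime input.
class StaticConvModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.kernel = torch.nn.Parameter(torch.rand(16, 3, 1, 1))

    def forward(self, x):
        return F.conv2d(x, self.kernel, stride=1)
```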

@monsterlyg

In my situation, I used torch.nn.Conv2d instead of F.conv2d. It seems that with nn.Conv2d you initialize in_channels and out_channels, so the ONNX parser knows how to deal with it, whereas F.conv2d passes the kernel weights by hand.

@monsterlyg

In my situation, I used torch.nn.Conv2d instead of F.conv2d. It seems that with nn.Conv2d you initialize in_channels and out_channels, so the ONNX parser knows how to deal with it, whereas F.conv2d passes the kernel weights by hand.

because the data input and the kernel weights are two separate inputs
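The nn.Conv2d variant described above can be sketched like this (class name is illustrative): the layer owns its weights, so the exported Conv node has a single data input and the weights become an initializer the ONNX parser can consume directly.

```python
import torch

# Illustrative sketch: nn.Conv2d owns its weights, so ONNX export
# produces a Conv node with static (initializer) weights.
class Conv2dModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 16, kernel_size=1, stride=1, bias=False)

    def forward(self, x):
        return self.conv(x)
```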

@ttyio

ttyio commented Jul 13, 2022

This dynamic-weights feature will be supported in the next release. Thanks!

@erfaneshrati

Is there any update on the dynamic-weight convolution feature?

@rocco-haro

+1 on the convolution with dynamic weights @ttyio

@rocco-haro

Hello @kevinch-nv. First of all, thank you very much for contributing to the open-source community!

Quick question for you: is there a branch we could pull to use this feature for "early" access instead of waiting for the next release? If not, when do you expect the next release to ship?

@wuyunnben

+1 on the dynamic-weight convolution problem. I think writing a custom operator as a TensorRT plugin may solve it.

@zhenhuaw-me
Member

@erfaneshrati @rocco-haro @wuyunnben Thank you for your patience. The release will happen within one month.

@zhenhuaw-me zhenhuaw-me self-assigned this Oct 25, 2022
@mangoyuan

+1 waiting for dynamic weight convolution to accelerate inference time ^-^

@zhenhuaw-me
Member

Thank you all for your patience! Please check the latest TensorRT 8.5.1 release. You can find the header description here: https://github.com/NVIDIA/TensorRT/blob/main/include/NvInfer.h#L1442

@rocco-haro

Thank you @zhenhuaw-me !

@zhenhuaw-me
Member

Closing this issue since this feature has been released. Feel free to reopen if any further questions. Thanks!

@lansfair

lansfair commented Jan 4, 2023

So now, how do I convert F.conv2d to TensorRT?

@zhenhuaw-me
Member

@lansfair If you are looking to convert from PyTorch to TensorRT directly, you might try https://github.com/pytorch/TensorRT; otherwise, you can export the PyTorch model to ONNX and let TensorRT load the ONNX model.
