Converting multi-input conv layer to tensorrt failed. #609
Comments
@weizhiyi777 No, there is a bug in your code, and onnx2trt currently does not support multi-input conv. See my issue, which you can reproduce: once you export a correct ONNX model, you will get an error like this:
We only support multi-input convs for quantized networks. What is the use case of this conv?
Hi @kevinch-nv, multi-input convs are used in the SOLOv2 network, which is an instance segmentation network. One of its innovations is using features to predict the weights of convs; these predicted weights are then used in subsequent conv operations. Details can be found in the paper: https://arxiv.org/abs/2003.10152 I think TensorRT could support multi-input conv ops. It would be very helpful for onnx-tensorrt to support that, because more and more networks are starting to predict conv weights.
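To make the SOLOv2-style use case concrete, here is a minimal stdlib-only sketch (all shapes and the weight-prediction rule are hypothetical, chosen only for illustration) of a "dynamic-weight" convolution: the kernel is computed from the input features at runtime instead of being a stored parameter, so the conv node in an exported graph has two inputs, data and weights.

```python
def conv2d(x, w):
    """Valid 2D convolution (single channel, stride 1, no padding)."""
    h, wd = len(x), len(x[0])
    kh, kw = len(w), len(w[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(wd - kw + 1):
            row.append(sum(x[i + di][j + dj] * w[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def predict_weights(features):
    """Hypothetical stand-in for the branch that maps features to conv weights."""
    s = sum(sum(r) for r in features)
    return [[s, s], [s, s]]  # a 2x2 kernel derived from the input itself

x = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
w = predict_weights(x)  # weights depend on the data...
y = conv2d(x, w)        # ...so the conv effectively has two dynamic inputs
```

Because `w` is data-dependent, it cannot be folded into the graph as a constant initializer, which is exactly why the ONNX Conv node ends up with a second dynamic input.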
@weizhiyi777 Some people were able to convert SOLOv2 to TensorRT: https://mp.weixin.qq.com/s/gk3Rq2kmZ159gZdYNGGvMA
@jinfagang Thanks a lot! I saw this problem was dealt with by other solutions. I will try that.
I also met this problem when I used F.conv2d. Can you tell me how to solve it? @jinfagang @weizhiyi777
@Xiaoyw1998 I haven't solved it yet, but maybe you can get some inspiration from this link: https://mp.weixin.qq.com/s/gk3Rq2kmZ159gZdYNGGvMA
Thank you for your reply. But I want to use a 7x7 conv; it cannot be replaced by a matrix multiplication.
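As a side note on the comment above: a k x k convolution can in fact be rewritten as a matrix multiplication via the im2col trick (in PyTorch, `torch.nn.functional.unfold`), which is one possible workaround when the backend only supports matmul with dynamic operands. A minimal stdlib-only sketch (single channel, stride 1, no padding; shapes are illustrative):

```python
def im2col(x, kh, kw):
    """Unfold each kh x kw patch of x into one row of a matrix."""
    h, w = len(x), len(x[0])
    return [[x[i + di][j + dj] for di in range(kh) for dj in range(kw)]
            for i in range(h - kh + 1) for j in range(w - kw + 1)]

def matmul_conv(x, kernel):
    """Convolution expressed as (patch matrix) x (flattened kernel)."""
    kh, kw = len(kernel), len(kernel[0])
    flat_k = [kernel[di][dj] for di in range(kh) for dj in range(kw)]
    # Each output value is the dot product of a patch row with the kernel.
    return [sum(a * b for a, b in zip(row, flat_k)) for row in im2col(x, kh, kw)]

x = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
k = [[1, 0], [0, 1]]
print(matmul_conv(x, k))  # -> [6, 8, 12, 14]
```

The same construction works for a 7x7 kernel; the cost is the memory blow-up of materializing the unfolded patch matrix.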
@weizhiyi777 @jinfagang @Xiaoyw1998 Have you solved the problem with Multi Input Convolution? |
No |
This is still a known limitation inside TensorRT. We are planning to support this in a future release of TensorRT, in the meantime it's recommended to export your models with static conv weights if possible. |
In my situation, I used torch.nn.Conv2d instead of F.conv2d. It seems that with nn.Conv2d you initialize in_channels and out_channels and the kernel weights are stored as module parameters, so the ONNX parser knows how to deal with them. But F.conv2d takes the kernel weights by hand, so the data input and the kernel weights become two separate inputs.
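The contrast described above can be sketched as follows (assumes PyTorch is available; module names and shapes are illustrative). With `nn.Conv2d` the kernel is a registered `Parameter`, so `torch.onnx.export` bakes it into the Conv node as a constant initializer; with `F.conv2d` the kernel can be any runtime tensor, which yields a multi-input Conv node that older TensorRT versions rejected.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StaticConv(nn.Module):
    """Kernel is a module Parameter -> exported as a constant initializer."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3)

    def forward(self, x):
        return self.conv(x)

class DynamicConv(nn.Module):
    """Kernel arrives as a runtime input -> multi-input Conv node in ONNX."""
    def forward(self, x, weight):
        return F.conv2d(x, weight)

x = torch.randn(1, 3, 16, 16)
w = torch.randn(8, 3, 3, 3)
y1 = StaticConv()(x)
y2 = DynamicConv()(x, w)
assert y1.shape == y2.shape == (1, 8, 14, 14)
```

Both modules compute the same kind of convolution; only the provenance of the weights differs, and that is what determines how the exporter represents the Conv node.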
This dynamic weights feature will be supported in next release. thanks! |
Is there any update on the convolution with dynamic weight feature? |
+1 on the convolution with dynamic weights @ttyio |
Hello @kevinch-nv. First of all, thank you very much for contributing to the open-source community! Quick question for you: is there a branch we could pull to use this feature for "early" access instead of waiting for the next release? If not, when do you expect the next release to ship? |
+1, same problem with convolution with dynamic weights. I think writing a custom operator as a TensorRT plugin may solve this problem.
@erfaneshrati @rocco-haro @wuyunnben Thank you for your patience. The release will happen in one month.
+1 waiting for dynamic weight convolution to accelerate inference time ^-^ |
Guys, thank you for your patience! Please check the latest TensorRT 8.5.1 release. You can find the header description here: https://github.com/NVIDIA/TensorRT/blob/main/include/NvInfer.h#L1442
Thank you @zhenhuaw-me ! |
Closing this issue since this feature has been released. Feel free to reopen if any further questions. Thanks! |
So now how do I convert F.conv2d to tensorrt? |
@lansfair If you are looking to convert from PyTorch to TensorRT directly, you might try https://github.com/pytorch/TensorRT; otherwise you can export PyTorch to ONNX and let TensorRT load the ONNX model.
Hello guys,
I tried converting a model (PyTorch -> ONNX -> TensorRT) with one multi-input conv layer, but it failed :(
Here is the script of converting pytorch model to onnx model:
I also use onnx python API to print some onnx model info:
When I used onnx2trt tool to convert this onnx model to tensorrt engine, I got the following error:
I have read the code, and it seems onnx-tensorrt actually should be able to support a multi-input conv layer. Could you help look into this issue?
What's more, the version info is as follows:
pytorch: 1.4.0
tensorrt: 7.1.2.8
onnx-tensorrt: 7.1.0
Thanks a lot!
Best regards,
Wei Zhiyi