[8] Assertion failed: ctx->network()->hasExplicitPrecision() && "TensorRT only supports multi-input conv for explicit precision QAT networks!" #645
Comments
@mk-nvidia To be more specific, this is the problem:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MG(nn.Module):
    def __init__(self):
        super().__init__()
        # for test if torch.cat([bool, bool]) can convert

    def forward(self, x, b):
        # the convolution weights arrive as a runtime tensor, not a module parameter
        preds = F.conv2d(x, b, stride=1)
        return preds


torch_model = MG()
x = torch.randn([1, 4, 24, 24])
b = torch.randn([8, 4, 3, 3])
torch_out = torch_model(x, b)

# Export the model
torch.onnx.export(torch_model,          # model being run
                  (x, b),
                  "a.onnx",
                  export_params=True,   # store the trained parameter weights inside the model file
                  opset_version=11,     # the ONNX version to export the model to
                  do_constant_folding=True,
                  verbose=True)
print('Done!')
```

The root reason is that this op cannot be converted, and it is used in many modern models such as SOLOv2 and dynamic convolutions.
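A minimal sketch (not from the thread) to check what the export above actually produces, assuming "a.onnx" exists: if the Conv node's second input is a graph input rather than an initializer, TensorRT sees a multi-input Conv and raises the assertion in the title.

```python
import onnx

model = onnx.load("a.onnx")
initializer_names = {init.name for init in model.graph.initializer}
for node in model.graph.node:
    if node.op_type == "Conv":
        weight_name = node.input[1]
        # A weight coming from a graph input (here "b") instead of an
        # initializer is what TensorRT rejects for non-QAT networks.
        print(weight_name, "is an initializer:", weight_name in initializer_names)
```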
@mk-nvidia Please take a look:
@jinfagang TensorRT requires that the second input of the Conv node (the filter weights) be a constant initializer; a Conv whose weights come from another tensor at runtime is only supported for explicit-precision QAT networks, as the assertion says.
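A minimal sketch (not from the thread) of one way to satisfy that requirement when the filter values are fixed and known before export: copy them into an nn.Conv2d so they are serialized as an ONNX initializer instead of being passed in as a graph input. The StaticConv name and the shapes below are illustrative.

```python
import torch
import torch.nn as nn


class StaticConv(nn.Module):
    """Hypothetical wrapper: bakes a known filter tensor into nn.Conv2d before export."""

    def __init__(self, weight):
        super().__init__()
        out_ch, in_ch, kh, kw = weight.shape
        self.conv = nn.Conv2d(in_ch, out_ch, (kh, kw), stride=1, bias=False)
        with torch.no_grad():
            self.conv.weight.copy_(weight)  # becomes an ONNX initializer at export time

    def forward(self, x):
        return self.conv(x)


b = torch.randn([8, 4, 3, 3])          # known, fixed filters
x = torch.randn([1, 4, 24, 24])
torch.onnx.export(StaticConv(b), x, "static_conv.onnx",
                  export_params=True, opset_version=11)
```

This only helps when the filters are fixed; for truly dynamic convolutions whose filters are computed from the input (as in SOLOv2), the weights cannot become initializers and the limitation described above still applies.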
@jackwish
test_conv_derivative.zip Conv2d derivative source code:
Edit: I replaced the functional conv2d with nn.Conv2d and set the weights during the inference path, but unfortunately it didn't work.
I also ran into this issue; have you solved it? @jinfagang
Closing as duplicate of #609
Hi, I am trying to convert an ONNX model with the normal float32 data type, not a QAT model, but it gives me this error message:

And I can reproduce the error with this minimal code:

If you export the ONNX model with PyTorch 1.7 and try to convert it to a TensorRT engine, it shows this error:

You might ask why I use `torch.tensor(0.03, dtype=torch.float)` in the `>` op. It is because otherwise the float is cast to double, which puts a double data type into the ONNX graph and makes onnx2trt raise another error, `unsupported datatype 11`. So how should we solve this awkward situation?
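As an aside on that casting detail, here is a minimal sketch of the pattern being described (not the reporter's model; the ThresholdNet name and the threshold usage are illustrative): wrapping the scalar as a float32 tensor so the comparison does not pull a double constant into the exported graph (ONNX data type 11 is DOUBLE).

```python
import torch
import torch.nn as nn


class ThresholdNet(nn.Module):
    """Hypothetical example of the pattern described in the issue."""

    def forward(self, x):
        # Comparing against a bare Python float (x > 0.03) could, with older
        # exporters such as PyTorch 1.7 (as reported above), introduce a double
        # constant that onnx2trt rejects as "unsupported datatype 11".
        return x > torch.tensor(0.03, dtype=torch.float)


torch.onnx.export(ThresholdNet(), torch.randn(1, 4, 24, 24),
                  "threshold.onnx", opset_version=11)
```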