Stand-alone pad operation fails with: Assertion failed: inputs.at(1).is_weights() #439
Hi @copaah, as for this error:
PyTorch generated a rather funky ONNX graph for this simple model. I don't know whether this is an issue to be fixed on their part or on the ONNX parser's part; @kevinch-nv might be able to answer that. As a workaround, you can try running the model through onnx-simplifier.
The original and simplified ONNX models are attached here: onnx-models.zip
For padding, the ONNX-TRT parser expects the pad values to be initializers (i.e. constants) in the ONNX graph. I checked @rmccorm4's zip package, and the nodes that were contributing to the pad dimensions were constant-folded into an initializer by onnx-simplifier.
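The check behind the `inputs.at(1).is_weights()` assertion can be pictured with a small pure-Python sketch. The graph and node structures below are hypothetical stand-ins, not the real onnx-tensorrt API: for Pad at opset 11 and later, the second input carries the pad amounts, and the parser accepts the node only when that tensor is a graph initializer (a baked-in constant) rather than the output of upstream computation.

```python
def pad_inputs_are_weights(pad_node_inputs, initializer_names):
    """True when the pads tensor (second input of Pad) is an initializer,
    i.e. a constant the parser can read at parse time."""
    return len(pad_node_inputs) > 1 and pad_node_inputs[1] in initializer_names

# pads produced by an upstream node -> the parser assertion would fire:
print(pad_inputs_are_weights(["x", "computed_pads"], {"weight0"}))   # False

# pads stored as an initializer (what onnx-simplifier produces) -> accepted:
print(pad_inputs_are_weights(["x", "pads"], {"pads", "weight0"}))    # True
```

This is why constant folding fixes the error: it does not change the padding semantics, it only moves the pad amounts from computed tensors into initializers.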
@kevinch-nv thanks for the insight. Is there any parameter to torch.onnx.export that will correctly create the padded values as initializers?
Perhaps the constant-folding functionality of the current torch2onnx export doesn't support this particular structure yet. Pre-opset 11, the Pad operator stored its pad amounts as a node attribute rather than as an input.
Good call @kevinch-nv! With opset 10 it exports correctly, whether you use the default args or set the other export options. I raised an issue in PyTorch to see what they think: pytorch/pytorch#35516
@copaah are you able to import your model with TensorRT now?
Same issue. Has anyone solved this?
Hi @rmccorm4, after applying your instructions from issues #386 and #439 I got a new error. Any idea how to fix it?
Half a year later, the issue still persists.
Same here.
Same here, wasted so much time! Has anyone solved it?
@opeide @dedoogong @wkl2013DeepVision Could you try constant folding first?
We have documentation here: https://github.com/onnx/onnx-tensorrt/blob/master/docs/faq.md#common-assertion-errors. Thanks!
@ttyio
Sorry @romain87400, this command can help fold constants and resolve some of these assertion errors.
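What a constant-folding pass does can be sketched in a few lines of pure Python. The graph format below is a made-up stand-in (op name, inputs, output tuples), not the real ONNX or onnx-simplifier data structures: nodes whose inputs are all known constants are evaluated ahead of time and their outputs promoted to initializers, which is exactly the form the TensorRT parser needs for Pad.

```python
import operator

# Foldable ops in this toy graph format: (op_name, input_names, output_name)
OPS = {"Add": operator.add, "Mul": operator.mul}


def fold_constants(nodes, initializers):
    """Evaluate every node whose inputs are all initializers, move its
    output into the initializer table, and drop the node from the graph."""
    remaining = []
    for op, inputs, output in nodes:
        if op in OPS and all(name in initializers for name in inputs):
            a, b = (initializers[name] for name in inputs)
            initializers[output] = OPS[op](a, b)
        else:
            remaining.append((op, inputs, output))
    return remaining, initializers


# Pad amount computed as 1 + 1 at export time instead of stored as a constant:
nodes = [("Add", ["one", "one"], "pad_amount"),
         ("Pad", ["x", "pad_amount"], "y")]
inits = {"one": 1}

nodes, inits = fold_constants(nodes, inits)
print(nodes)                  # only the Pad node remains
print(inits["pad_amount"])    # 2, now a constant the parser can read
```

After folding, the Pad node's second input names an initializer instead of an `Add` output, which is the shape of graph the parser accepts.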
@ttyio Thanks for your answer. Sorry, I'm a beginner on the subject.
Hello @romain87400, I am not sure. If you can adjust your model: find the node from the error, change it to pad with 0s, and fine-tune your model.
@ttyio Any update on fixing it? Thanks!
@phamdat09 Non-constant padding will be supported in the next release (in around 2 months), thanks.
@ttyio Thanks for your info, but in my case I think it is constant padding.
[ERROR] [TRT] /home/bigbigboy/Documents/Test/TensortRT_Dynamic/TensorRT/parsers/onnx/ModelImporter.cpp:728: --- End node ---
Works for me, thanks!
From 8.0? It will be fixed in 8.2, maybe? https://github.com/onnx/onnx-tensorrt/blob/master/builtin_op_importers.cpp#L3100-L3120
@copaah Could you try TRT 8.2/8.4 and see if the issue still exists? If it does, we will debug it. Thanks!
Closing for now due to >14 days without activity. Please feel free to reopen if the issue still exists. Thanks!
This is perhaps related to #847.
Description
My current workflow is pytorch -> onnx -> tensorrt, and I encounter an issue with the nn.ConstantPad2d operation that results in the error from the title (Assertion failed: inputs.at(1).is_weights()).
Environment
OS: Ubuntu 18.04
torch: 1.4.0
onnx: 1.6.0
tensorrt: 7.0.0
cuda: 10.0
python: 2.7
Steps To Reproduce
Run with:
Run it through tensorrt, which will result in the above error.
Related issues
onnx/onnx-tensorrt#378
onnx/onnx-tensorrt#411