onnx error with different input size #22
I assume you exported to ONNX with the same resolution set? If yes, I don't have an answer. The ONNX ecosystem is a bit touchy; I've seen numerous breaks over the past few PyTorch and ONNX version changes. TensorRT adds yet another variable. Good luck, and please update if you find a solution to help others.
If I use a feature size of 105 x 105 with your stride-2 conv design and with https://github.com/lukemelas/EfficientNet-PyTorch, I get a 53 x 53 output.
@alicera I'm not sure what your issue is. As far as I understand, 53x53 is the correct output if the input is 105x105, the stride is 2, and padding is set to 'SAME' or 1, as it should be for the stride-2 convs in this network.
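The arithmetic above can be checked directly: with 'SAME' padding, a stride-`s` conv produces a spatial output of `ceil(in / s)`. A minimal sketch (the function name `conv_same_out` is mine, not from either repo):

```python
import math

def conv_same_out(size: int, stride: int) -> int:
    """Spatial output of a conv with 'SAME' padding: ceil(size / stride)."""
    return math.ceil(size / stride)

# A 105x105 feature map through a stride-2 'SAME' conv gives 53x53.
print(conv_same_out(105, 2))  # → 53
```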
With https://github.com/lukemelas/EfficientNet-PyTorch, the same error happens if I use an input size of (1, 3, 840, 840).
I exported efficientnet_b0 to ONNX, set a 640x640 input size, and the following error occurs:
```python
import onnx
import onnx_tensorrt.backend as backend
import numpy as np

# Load the exported ONNX graph and build a TensorRT engine from it.
model = onnx.load("efficientnet_b0.onnx")
engine = backend.prepare(model, device='CUDA:0')

# Run inference with a random 640x640 input.
input_data = np.random.random(size=(1, 3, 640, 640)).astype(np.float32)
output_data = engine.run(input_data)[0]
print(output_data)
print(output_data.shape)
```
[Error]

```
[TensorRT] ERROR: Parameter check failed at: ../builder/Network.cpp::addPoolingNd::500, condition: allDimsGtEq(windowSize, 1) && volume(windowSize) < MAX_KERNEL_DIMS_PRODUCT
Traceback (most recent call last):
  File "test_onnx.py", line 7, in <module>
    engine = backend.prepare(model, device='CUDA:0')
  File "/opt/conda/lib/python3.6/site-packages/onnx_tensorrt-0.1.0-py3.6-linux-x86_64.egg/onnx_tensorrt/backend.py", line 218, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/onnx_tensorrt-0.1.0-py3.6-linux-x86_64.egg/onnx_tensorrt/backend.py", line 94, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 8:
builtin_op_importers.cpp:1175 In function importGlobalAveragePool:
[8] Assertion failed: layer_ptr
```
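For context on why the input size matters here: TensorRT's ONNX parser implements `GlobalAveragePool` as a pooling layer whose window spans the entire final feature map, and the `addPoolingNd` check above rejects windows it cannot build (the exact `MAX_KERNEL_DIMS_PRODUCT` cap depends on the TensorRT version). EfficientNet-B0 downsamples by a total stride of 32, so the pooling window grows with the input. A sketch of that relationship (the stride-32 figure is standard for EfficientNet-B0; the function name is mine):

```python
def global_pool_window(input_size: int, total_stride: int = 32) -> int:
    """Side length of the final feature map, i.e. the window the
    GlobalAveragePool layer must cover (EfficientNet-B0 downsamples 32x)."""
    return input_size // total_stride

for s in (224, 640):
    w = global_pool_window(s)
    print(f"input {s}x{s} -> pooling window {w}x{w} ({w * w} elements)")
```

A 224x224 export yields the familiar 7x7 window, while 640x640 yields 20x20, which is what the parser is being asked to build when it fails here.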