
can you export onnx? #2

Open
you-old opened this issue Sep 18, 2019 · 10 comments

you-old commented Sep 18, 2019

can you export onnx?

biubug6 (Owner) commented Sep 18, 2019

> can you export onnx?

It's easy. I'll provide it later.

piotr-anyvision commented

@biubug6 If you could do it, that would be amazing!

SnowRipple commented

@biubug6 Do you have a timetable for the ONNX model, please?

biubug6 (Owner) commented Oct 26, 2019

@SnowRipple @wangpupanjing So sorry to have kept you waiting. I have now provided the script "convert_to_onnx.py" to export ONNX.
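
(For readers following the thread: the core of such an export script is a single `torch.onnx.export` call. A minimal sketch follows, with a stand-in module in place of the real RetinaFace network; the file name matches the one used later in this thread, but the exact arguments in convert_to_onnx.py may differ.)

```python
import torch
import torch.nn as nn

# Stand-in for the RetinaFace model; substitute the real network with
# trained weights loaded (e.g. via net.load_state_dict(...)).
class TinyDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = TinyDetector().eval()

# Dummy input fixing the export resolution (640x640 assumed here).
dummy_input = torch.randn(1, 3, 640, 640)

# Trace the model and serialize it to ONNX.
torch.onnx.export(
    model,
    dummy_input,
    "FaceDetector.onnx",   # file name used later in this thread
    input_names=["input"],
    output_names=["output"],
    opset_version=11,      # assumption; convert_to_onnx.py may differ
)
```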

SnowRipple commented

I think I had the same error. The problem was with onnx, not pytorch; I just updated onnx (pytorch can bundle a different onnx version) ;)

SnowRipple commented

Yes, but it was a while ago, so I don't remember the specifics. It is a known problem with onnx, but there was a simple fix; try Google ;)

121649982 commented

Can you tell me how to load the .onnx model with TensorRT? Thank you very much.

I load the model like this:

```cpp
// Point the sample's file locator at the directory holding the ONNX model.
std::string dataDirs = "E:/gc/Pytorch_Retinaface-master";
std::vector<std::string> dir;
dir.push_back(dataDirs);

// Parse the exported model with TensorRT's ONNX parser.
auto parsed = parser->parseFromFile(
    locateFile("FaceDetector.onnx", dir).c_str(),
    static_cast<int>(gLogger.getReportableSeverity()));
if (!parsed)
{
    return false;
}
```

but I get these errors:

```
[01/07/2020-19:19:10] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
(the warning above repeats several more times)
While parsing node number 106 [Upsample]:
ERROR: builtin_op_importers.cpp:3240 In function importUpsample:
[8] Assertion failed: scales_input.is_weights()
```
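
(Editorial note: the assertion fires because the Upsample node's scales are computed at runtime inside the graph instead of being stored as constant weights. One workaround reported later in this thread is running onnx-simplifier over the exported model before parsing it with TensorRT; a minimal sketch, with file names taken from the snippet above:)

```python
import onnx
from onnxsim import simplify  # pip install onnx-simplifier

# Load the exported model and let the simplifier fold the dynamically
# computed Upsample scales into constant initializers.
model = onnx.load("FaceDetector.onnx")
model_simplified, check = simplify(model)
assert check, "simplified ONNX model failed validation"

onnx.save(model_simplified, "FaceDetector_sim.onnx")
```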

xsacha (Contributor) commented Mar 3, 2020

I think torch2trt is better than going pytorch -> onnx -> tensorrt.
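
(For reference, a minimal sketch of the torch2trt route, assuming a CUDA device and torch2trt installed; the resnet18 here is only a stand-in for the real RetinaFace network with weights loaded:)

```python
import torch
import torchvision
from torch2trt import torch2trt

# Stand-in network; substitute the RetinaFace model with trained weights.
model = torchvision.models.resnet18().eval().cuda()

# Example input at the intended inference resolution (assumed 640x640).
x = torch.randn(1, 3, 640, 640).cuda()

# torch2trt traces the PyTorch model and builds a TensorRT engine
# directly, skipping the ONNX intermediate representation.
model_trt = torch2trt(model, [x])

# The converted module is called like the original one.
y_trt = model_trt(x)
```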

zzxaijs commented Apr 11, 2020

```
[01/07/2020-19:19:11] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
While parsing node number 106 [Upsample]:
ERROR: builtin_op_importers.cpp:3240 In function importUpsample:
[8] Assertion failed: scales_input.is_weights()
```

How can this same error be solved?

denisvmedyantsev commented Sep 4, 2020

I faced the same problem with the pytorch -> onnx -> tensorrt approach as above. I used the simplifier and it helped, but then I found a new problem: the trt engine's output differs for different batch sizes. Instead of using the simplifier, fix the interpolation in

```python
up3 = F.interpolate(output3, size=[output2.size(2), output2.size(3)], mode="nearest")
up2 = F.interpolate(output2, size=[output1.size(2), output1.size(3)], mode="nearest")
```

by casting the sizes to int, e.g.:

```python
up3 = F.interpolate(output3, size=(int(output2.size(2)), int(output2.size(3))), mode="nearest")
```

That helped.
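
(To see why the cast matters: during `torch.onnx.export` tracing, `Tensor.size()` yields traced values, so the Upsample target size gets computed inside the graph; `int(...)` bakes it in as a constant that TensorRT can read as weights. A self-contained sketch, with illustrative names and shapes not taken from the repo:)

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpsampleBlock(nn.Module):
    """Illustrative module mirroring the FPN's nearest-neighbor upsampling."""

    def forward(self, low, high):
        # Casting to int freezes the target size as a constant in the
        # traced graph, so the exported Upsample node has constant scales.
        return F.interpolate(
            low,
            size=(int(high.size(2)), int(high.size(3))),
            mode="nearest",
        )

low = torch.randn(1, 64, 20, 20)    # coarse feature map
high = torch.randn(1, 64, 40, 40)   # finer feature map providing the size
torch.onnx.export(UpsampleBlock(), (low, high), "upsample_fix.onnx", opset_version=11)
```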
