
FP16 Mode support #10

Open
PhuongNDVN opened this issue Nov 17, 2022 · 0 comments
PhuongNDVN commented Nov 17, 2022

Thank you for sharing this project. It's really helpful to me.

I tested the examples without any issue using an FP32 engine. However, I'm having a problem with an FP16 engine: the program runs normally, but there are no detections in the result. I built the FP16 engine from the yolov5 GitHub repo with the --half option. I suspect the problem comes from
DeviceMemory::setup() and CvCpuPreprocessor::process(), because the input elements should be 2 bytes each, not 4 bytes.

Updated:
I have summarized the cases where I succeeded and failed to get correct results. The ONNX models are generated by the yolov5 hub (FP16 via the --half option):

  • ONNX FP32 -> TensorRT FP32 (engine built by yolov5-hub | this hub | NVIDIA container): success
  • ONNX FP32 -> TensorRT FP16 (engine built by this hub | NVIDIA container): success
  • TensorRT FP16 (engine built by yolov5-hub with the --half option): failed
  • ONNX FP16 -> TensorRT FP16 (engine built by this hub | NVIDIA container): failed

I think the FP16-mode issue is related to the ONNX FP16 model. Please take a look at this issue when you have time. Thanks.
