
I met a problem in transforming yolov4-608 to INT8 format. #384

Closed
EthanGuan opened this issue Mar 28, 2021 · 6 comments

Comments

@EthanGuan

Verbose build log:

[TensorRT] VERBOSE: Layer: 161_convolutional Weights: 1044480 HostPersistent: 0 DevicePersistent: 0
[TensorRT] VERBOSE: Layer: (Unnamed Layer* 506) [PluginV2IOExt] Weights: 0 HostPersistent: 16 DevicePersistent: 0
[TensorRT] VERBOSE: Layer: (Unnamed Layer* 507) [PluginV2IOExt] Weights: 0 HostPersistent: 16 DevicePersistent: 0
[TensorRT] VERBOSE: Layer: (Unnamed Layer* 508) [PluginV2IOExt] Weights: 0 HostPersistent: 16 DevicePersistent: 0
[TensorRT] VERBOSE: Total Host Persistent Memory: 63456
[TensorRT] VERBOSE: Total Device Persistent Memory: 13814784
[TensorRT] VERBOSE: Total Weight Memory: 250931200
[TensorRT] VERBOSE: Builder timing cache: created 85 entries, 1044 hit(s)
[TensorRT] VERBOSE: Engine generation completed in 70.9201 seconds.
[TensorRT] VERBOSE: Calculating Maxima
[TensorRT] INFO: Starting Calibration.
[TensorRT] INFO: Post Processing Calibration data in 3.744e-06 seconds.
Traceback (most recent call last):
  File "onnx_to_tensorrt.py", line 195, in <module>
    main()
  File "onnx_to_tensorrt.py", line 184, in main
    args.model, args.category_num, args.int8, args.dla_core, args.verbose)
  File "onnx_to_tensorrt.py", line 154, in build_engine
    engine = builder.build_engine(network, config)
IndexError: _Map_base::at

@jkjung-avt
Owner

This seems to be an issue of ONNX. Reference: onnx/onnx#2458

If you generate the ONNX from a PyTorch model, I think you could try this fix: onnx/onnx#2417 (comment). For example,

    torch.onnx.export(model, dummy_input, "yolov4-608.onnx", export_params=True, keep_initializers_as_inputs=True, verbose=True)

@EthanGuan
Author

The model was downloaded from the original darknet project, and I used your protobuf install script with onnx 1.4.1.

Perhaps it is a package conflict? I installed pytorch on Xavier from https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-8-0-now-available/72048

Thank you for your quick reply.

@jkjung-avt
Owner

Check whether your onnx 1.4.1 has been replaced by a newer version (during the process of installing pytorch or something else). Otherwise, I have not run into this problem myself, so it would be difficult for me to debug this issue further...
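The version check suggested above can be sketched like this (a minimal, hedged example; `module_origin` is a hypothetical helper, not part of the repo's scripts):

```python
import importlib.util

def module_origin(name):
    """Return the file path a module would be imported from, or None if it is absent."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# If the reported path points into a site-packages directory that the pytorch
# install also wrote to, a newer onnx may have shadowed the pinned 1.4.1.
print("onnx imported from:", module_origin("onnx"))
```

Running this inside the same interpreter that runs `onnx_to_tensorrt.py` shows exactly which onnx installation gets picked up; comparing it against `pip3 show onnx` can reveal a shadowed install.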

@EthanGuan
Author

Hi JK,

I still cannot resolve this issue. I uninstalled pytorch and made sure that when I run onnx_to_tensorrt.py, the ONNX version is 1.4.1.

However, DLA FP16 works for me: yolov4-608x608 runs at about 4 FPS.

@jkjung-avt
Owner

IndexError: _Map_base::at

I cannot reproduce/debug this issue, so I don't have further suggestions now.

@jkjung-avt
Owner

Closing this issue as "unreproducible".
