Hi, I have a YOLOv4 model that I want to run on TensorRT with INT8. I have read the documentation, but as a non-native English speaker I am having a hard time following it. Can you please guide me on how to convert the model and prepare the dataset for the ProgramEntrance.py script? My dataset is in YOLO format.
Thanks
You will eventually get a file named Quantized.onnx and a file named Quantized.json; use the script here to generate the executable TensorRT engine.
Note that the engine is not portable: once the binary engine has been generated, you should not move it to another machine. The only way to transport a quantized model is to send Quantized.onnx and Quantized.json and regenerate the engine on the target machine.