Quantization model deploy on GPU #318
Please check the docs of YOLOv7End2EndORT: this class does not support the TRT backend. To support the TRT::EfficientNMS_TRT op, you should use YOLOv7End2EndTRT instead. To export YOLOv7 with TRT NMS:
# Download the weights (Tips: corresponding to the YOLOv7 release v0.1 code)
wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt
# Export an ONNX format file with TRT_NMS
python export.py --weights yolov7.pt --grid --end2end --simplify --topk-all 100 --iou-thres 0.65 --conf-thres 0.35 --img-size 640 640
# The command to export other models is similar: replace yolov7.pt with yolov7x.pt, yolov7-d6.pt, yolov7-w6.pt, ...
When using YOLOv7End2EndTRT, you only need to provide the ONNX file; there is no need to convert a TRT file yourself, it is converted automatically during inference. You can also refer to the example at vision/detection/yolov7end2end_trt. |
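For reference, a minimal Python deployment sketch with FastDeploy's TRT backend, following the pattern of the yolov7end2end_trt example; the file names below ("yolov7-end2end.onnx", "test.jpg") are placeholders, and the first inference is slow because the TRT engine is built at that point:

```python
import cv2
import fastdeploy as fd

option = fd.RuntimeOption()
option.use_gpu()
option.use_trt_backend()  # TRT engine is generated automatically at the first inference

# placeholder file names; use your exported end2end ONNX model and a real image
model = fd.vision.detection.YOLOv7End2EndTRT("yolov7-end2end.onnx", runtime_option=option)

im = cv2.imread("test.jpg")
result = model.predict(im)
print(result)
```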
@DefTruth Thanks, I noticed the problem. Another interesting thing: I quantized the exported ONNX model (small object detection / VisDrone, Paddle) and it successfully became QUInt8 and runs inference. The problem is that the quantized ONNX is much slower than the original FP32, about 10 times. Meanwhile I will give an INT8 chip (Hailo-8) a shot. How can I quantize with calibration? weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_80e_sliced_visdrone_640_025.pdparams Best |
Are you running inference on CPU or GPU when you use the quantized ONNX model? |
@jiangjiajun Inferencing with GPU (2080 Ti). |
Which tool are you using to quantize your ONNX model? |
@jiangjiajun
|
This quantization tool is not supported by TensorRT for now. Refer to this doc: https://onnxruntime.ai/docs/performance/quantization.html#quantization-on-gpu |
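On the "how can I quantize with calibration" question above, here is a minimal sketch of static (calibrated) INT8 quantization with ONNX Runtime. It assumes the model's input tensor is named "images", takes 640x640 NCHW float input, and that a simple resize/normalize is enough; adjust the input name and preprocessing to match your exported model (PaddleDetection exports such as PP-YOLOE usually also take a scale_factor input). TensorRT generally expects the QDQ format for quantized ONNX models.

```python
import glob

import cv2
import numpy as np
from onnxruntime.quantization import (CalibrationDataReader, QuantFormat,
                                      QuantType, quantize_static)


class ImageCalibrationReader(CalibrationDataReader):
    """Feeds a folder of images to the calibrator, one batch of 1 at a time."""

    def __init__(self, image_dir, input_name="images", size=(640, 640)):
        # "images" and the preprocessing below are assumptions; match your model
        self.input_name = input_name
        self.size = size
        self.files = iter(glob.glob(f"{image_dir}/*.jpg"))

    def get_next(self):
        path = next(self.files, None)
        if path is None:
            return None  # signals the calibrator that data is exhausted
        img = cv2.imread(path)
        img = cv2.resize(img, self.size).astype(np.float32) / 255.0
        img = img.transpose(2, 0, 1)[None]  # HWC -> NCHW, batch of 1
        return {self.input_name: img}


quantize_static(
    "model_fp32.onnx",                      # placeholder input model
    "model_int8.onnx",                      # placeholder output model
    ImageCalibrationReader("calib_images"), # placeholder calibration image folder
    quant_format=QuantFormat.QDQ,           # QDQ is the format TensorRT consumes
    activation_type=QuantType.QInt8,
    weight_type=QuantType.QInt8,
    per_channel=True,
)
```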
Hi, FastDeploy will provide tools to quantize models, which suit deployment on FastDeploy better. See the current tutorial at https://github.com/PaddlePaddle/FastDeploy/tree/develop/tools/quantization. We will also release examples showing how to deploy INT8 models (YOLO series) on FastDeploy in two days. Which model do you want to quantize and deploy on FastDeploy? We will give you support. |
The model is: https://paddledet.bj.bcebos.com/models/ppyoloe_crn_l_80e_sliced_visdrone_640_025.pdparams We have a competition for a huge object detection project that matches the above model exactly, but we need to achieve at least 100 FPS on a Xavier NX at 640x480 px. The accuracy of the above model is perfect; we only need to speed it up to reach that performance, which is why I tried INT8 quantization. I tried C++ inference 👍 but there was not much speedup: still about 40 ms on the RTX 2080 Ti. |
We have tried to quantize ppyoloe_crn_l_300e_coco, and it works well on FastDeploy. |
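While the INT8 examples are being prepared, here is a rough sketch of how a (quantized) PP-YOLOE export is typically loaded with FastDeploy's Python API on GPU with the TensorRT backend; the "ppyoloe_quant/..." paths and "test.jpg" are placeholders for the quantized export and a test image:

```python
import cv2
import fastdeploy as fd

option = fd.RuntimeOption()
option.use_gpu()
option.use_trt_backend()  # run the quantized model through TensorRT on the GPU

# placeholder paths for the quantized PP-YOLOE export
model = fd.vision.detection.PPYOLOE(
    "ppyoloe_quant/model.pdmodel",
    "ppyoloe_quant/model.pdiparams",
    "ppyoloe_quant/infer_cfg.yml",
    runtime_option=option)

im = cv2.imread("test.jpg")
result = model.predict(im)
print(result)
```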
@yunyaoXYY Any speed improvement? |
@yunyaoXYY Did you try ppyoloe_crn_l_80e_sliced_visdrone_640_025? Any speedup? |
@yunyaoXYY I need an invitation to join Slack. |
Hi, please try this: https://join.slack.com/t/fastdeployworkspace/shared_invite/zt-1hm4rrdqs-RZEm6_EAanuwEVZ8EJsG~g |
Inferencing with both, and both are slow: about 10x slower on GPU and 1.5x on CPU.
|
This issue will be closed since it has not been updated for a year. If needed, it can be reopened by updating it again. |
I have the error below; the model was downloaded from the ones you exported.
The same error occurs with the cache enabled or disabled.