YOLOv5 Model Optimization #647
👋 Hello @Waqas649, thank you for raising an issue about Ultralytics HUB 🚀! Please visit our HUB Docs to learn more:
If this is a 🐛 Bug Report, please provide screenshots and steps to reproduce your problem to help us get started working on a fix. If this is a ❓ Question, please provide as much information as possible, including dataset, model, and environment details, so that we can provide the most helpful response. We try to respond to all issues as promptly as possible. Thank you for your patience!
Hello! 😊 Great to hear about your progress with your custom YOLOv5 model. Optimizing your model for deployment with OpenVINO can indeed streamline the inference process. While we don't have a predefined online toolkit, converting your model to an OpenVINO-compatible format involves a couple of steps, starting with exporting your model to ONNX format. After that, you can use the OpenVINO Model Optimizer to convert the ONNX model to IR (Intermediate Representation) format, which is optimized for inference on various Intel hardware.

For a detailed step-by-step guide, including the necessary commands and further optimization tips, please refer to the "Deployment" section in our official documentation at https://docs.ultralytics.com/hub. It offers a comprehensive walkthrough that should fit your needs. If you encounter any specific issues or have further questions during the process, feel free to reach out here again. Happy optimizing! 🚀
Hello, I am trying to optimize a model I self-trained based on the YOLOv5s architecture. The device I am trying to run it on has no GPU and is not NVIDIA, so I cannot use TensorRT. I originally tried a quantization script, but it is not optimizing my model at all. I attach the code here:

```python
import torch
from torch.quantization import default_qconfig
from models.common import DetectMultiBackend
from utils.torch_utils import select_device

# Define device
device = select_device('')

# Check original model
DetectMultiBackend(weights="20230329_s.pt", device=device, dnn=False, data='data.yaml', fp16=False)

# Load your YOLOv5 model
ori_model = torch.load("20230329_s.pt", map_location=device)

# Assume the model is stored under the key 'model'
model = ori_model['model']

# Prepare your model for quantization
model.eval()

# Define quantization configuration targeting all layers
qconfig = default_qconfig

# Apply dynamic quantization to the model
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Replace the model field on the original checkpoint
ori_model['model'] = quantized_model

# Save the quantized model
torch.save(ori_model, "quantized.pt")
```

Thank you, Irati
Hello Irati,

It sounds like you've made a good initial attempt at quantizing your YOLOv5 model! Quantization can indeed be tricky depending on the specific characteristics of the model and the target device's requirements. Since standard dynamic quantization is not effectively optimizing your model, you might consider static quantization, which involves a few additional steps such as preparing calibration data to better understand the distribution of inputs. This approach can sometimes yield better performance, especially where dynamic quantization falls short.

Another alternative is to explore pruning before quantization, which reduces the model's size and complexity by removing unnecessary weights, potentially making the quantization more effective. If these approaches don't suit your needs, looking into other hardware-specific libraries compatible with your device's architecture (other than TensorRT) could be beneficial. Some devices have specialized libraries or SDKs designed to optimize models specifically for their architecture.

Keep experimenting and don't hesitate to reach out if you have more questions! 💪
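One detail worth noting: PyTorch's dynamic quantization only replaces `nn.Linear` (and recurrent) modules, while YOLOv5 consists mostly of `nn.Conv2d` layers, which would explain why the checkpoint barely shrank. A minimal sketch of what `quantize_dynamic` actually does, on a toy linear model (the layer sizes here are illustrative, not from YOLOv5):

```python
import torch
import torch.nn as nn

# Toy model made of Linear layers — the only module type that
# torch.quantization.quantize_dynamic converts by default.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
model.eval()

# Dynamically quantize the Linear layers to int8 weights.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The quantized model runs inference as usual; weights are now int8.
x = torch.randn(1, 16)
with torch.no_grad():
    out = quantized(x)

print(out.shape)  # torch.Size([1, 4])
```

For a conv-heavy network, static (post-training) quantization with calibration data, as suggested above, is the approach that actually covers `nn.Conv2d`.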
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help. For additional resources and information, please see the links below:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcome! Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Search before asking
Question
Hello, I trained a custom YOLOv5 model and now I want to deploy it, but first I want to optimize it. Is there any easy way to optimize it with OpenVINO? Some months ago I used an online toolkit to optimize it, but now it is no longer available. Can anyone help, please? Thanks.
Additional
No response