YOLOv5 Model Optimization #647

Closed · 1 task done
Waqas649 opened this issue Apr 18, 2024 · 5 comments

Labels: question (A HUB question that does not involve a bug), Stale

Comments

@Waqas649

Waqas649 commented Apr 18, 2024

Search before asking

Question

Hello, I trained a custom YOLOv5 model and now I want to deploy it, but first I want to optimize it. Is there an easy way to optimize it with OpenVINO? Some months ago I used an online toolkit to optimize it, but it is no longer available. Can anyone help, please? Thanks.

Additional

No response

@Waqas649 Waqas649 added the question (A HUB question that does not involve a bug) label Apr 18, 2024

👋 Hello @Waqas649, thank you for raising an issue about Ultralytics HUB 🚀! Please visit our HUB Docs to learn more:

  • Quickstart. Start training and deploying YOLO models with HUB in seconds.
  • Datasets: Preparing and Uploading. Learn how to prepare and upload your datasets to HUB in YOLO format.
  • Projects: Creating and Managing. Group your models into projects for improved organization.
  • Models: Training and Exporting. Train YOLOv5 and YOLOv8 models on your custom datasets and export them to various formats for deployment.
  • Integrations. Explore different integration options for your trained models, such as TensorFlow, ONNX, OpenVINO, CoreML, and PaddlePaddle.
  • Ultralytics HUB App. Learn about the Ultralytics App for iOS and Android, which allows you to run models directly on your mobile device.
    • iOS. Learn about YOLO CoreML models accelerated on Apple's Neural Engine on iPhones and iPads.
    • Android. Explore TFLite acceleration on mobile devices.
  • Inference API. Understand how to use the Inference API for running your trained models in the cloud to generate predictions.

If this is a 🐛 Bug Report, please provide screenshots and steps to reproduce your problem to help us get started working on a fix.

If this is a ❓ Question, please provide as much information as possible, including dataset, model, environment details etc. so that we might provide the most helpful response.

We try to respond to all issues as promptly as possible. Thank you for your patience!

@pderrenger
Member

Hello! 😊 Great to hear about your progress with your custom YOLOv5 model.

Optimizing your model for deployment with OpenVINO can indeed streamline the inference process. While we don't have a predefined online toolkit, converting your model to an OpenVINO-compatible format involves a couple of steps, starting with exporting your model to ONNX format. After that, you can utilize the OpenVINO Model Optimizer to convert the ONNX model to an IR (Intermediate Representation) format that's optimized for inference on various Intel hardware.
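
If it helps, here is a minimal sketch of that two-step workflow. It assumes a local checkout of the ultralytics/yolov5 repository and `pip install openvino` (2023.1 or newer for the `openvino.convert_model` API); `best.pt` is a placeholder for your trained weights file.

```python
# Sketch: export a custom YOLOv5 model to ONNX, then convert to OpenVINO IR.
# Run from inside a yolov5 repo checkout; "best.pt" is a placeholder filename.
import subprocess

# 1) Export the PyTorch weights to ONNX with the YOLOv5 export script
subprocess.run(
    ["python", "export.py", "--weights", "best.pt", "--include", "onnx"],
    check=True,
)

# 2) Convert the ONNX model to OpenVINO IR (.xml + .bin)
import openvino as ov

ov_model = ov.convert_model("best.onnx")
ov.save_model(ov_model, "best_openvino.xml")  # writes best_openvino.xml/.bin
```

Note that recent versions of the YOLOv5 export script can also produce the IR in a single step with `--include openvino`, which may be simpler if your repo checkout is up to date.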

For a detailed step-by-step guide, including necessary commands and further optimization tips, please refer to the "Deployment" section in our official documentation at https://docs.ultralytics.com/hub. It offers a comprehensive walkthrough that should fit your needs.

If you encounter any specific issues or have further questions during the process, feel free to reach out here again. Happy optimizing! 🚀

@iraa777

iraa777 commented May 15, 2024

Hello,

I am trying to optimize a model I trained myself based on the YOLOv5s architecture. The device I want to run it on has no GPU and is not NVIDIA hardware, so I cannot use TensorRT. I originally tried a quantization script, but it is not optimizing my model at all. I attach the code here:
```python
import torch
import torch.quantization
from torch.quantization import default_qconfig
from utils.torch_utils import select_device
from models.common import DetectMultiBackend

# Select device (CPU here, since the target has no GPU)
device = select_device('')

# Check that the original model loads correctly
DetectMultiBackend(weights="20230329_s.pt", device=device, dnn=False, data='data.yaml', fp16=False)

# Load the YOLOv5 checkpoint
ori_model = torch.load("20230329_s.pt", map_location=device)
print(ori_model.keys())

# Assume the model is stored under the 'model' key
model = ori_model['model']

# Prepare the model for quantization
model.eval()

# Quantization configuration targeting all layers.
# NOTE: quantize_dynamic only replaces layer types such as nn.Linear and
# nn.LSTM; YOLOv5 is dominated by nn.Conv2d, which is left untouched, so
# little size or speed change should be expected from this approach.
qconfig_dict = {'': default_qconfig}

# Apply dynamic quantization to the model
quantized_model = torch.quantization.quantize_dynamic(
    model,                      # the model to quantize
    qconfig_spec=qconfig_dict,  # quantization configuration
    dtype=torch.qint8,          # target data type after quantization
)

# Put the quantized model back into the checkpoint and save it
ori_model['model'] = quantized_model
torch.save(ori_model, "quantized.pt")
```
I would appreciate hearing from anyone who has experience with this. Maybe there is a tool available that I don't know about that could make my work much easier.

Thank you,

Irati

@pderrenger
Member

Hello Irati,

It sounds like you've made a good first attempt at quantizing your YOLOv5 model! Quantization can indeed be tricky depending on the specific characteristics of the model and the target device's requirements.

Since standard dynamic quantization is not effectively optimizing your model, you might consider trying static quantization, which involves a few additional steps such as running calibration data through the model so activation ranges can be observed. This often works better for convolution-heavy models like YOLOv5, because dynamic quantization only replaces layer types such as nn.Linear and nn.LSTM and leaves nn.Conv2d untouched.
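
For illustration, here is a minimal post-training static quantization sketch in eager-mode PyTorch. It assumes `model` is your loaded float32 module and `calibration_loader` is a hypothetical DataLoader of representative images, neither of which is defined in this thread; YOLOv5-specific layers (e.g. SiLU activations and tensor concatenations) may need extra handling beyond this outline.

```python
import torch
import torch.quantization as tq

# Wrap the float model so tensors are quantized on entry and
# dequantized on exit (required for eager-mode static quantization).
class QuantWrapper(torch.nn.Module):
    def __init__(self, float_model):
        super().__init__()
        self.quant = tq.QuantStub()
        self.model = float_model
        self.dequant = tq.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.model(self.quant(x)))

wrapped = QuantWrapper(model).eval()
wrapped.qconfig = tq.get_default_qconfig('fbgemm')  # use 'qnnpack' for ARM CPUs
prepared = tq.prepare(wrapped)

# Calibration: run representative batches so observers record activation ranges
with torch.no_grad():
    for images in calibration_loader:  # hypothetical loader of sample inputs
        prepared(images)

quantized = tq.convert(prepared)  # supported layers become int8 kernels
```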

Another alternative might be to explore pruning before quantization, which reduces the model size and complexity by removing unnecessary weights, potentially making the quantization more effective.
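
As a rough illustration of that idea, `torch.nn.utils.prune` can zero out low-magnitude convolution weights (again assuming `model` is the loaded float module). Keep in mind that unstructured pruning alone only zeroes weights; real speedups require sparse-aware runtimes or structured pruning, so treat this as a pre-quantization experiment.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Zero the 30% smallest-magnitude weights in every Conv2d layer
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the pruning mask into the weights
```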

If these approaches don’t suit your needs, looking into other hardware-specific libraries compatible with your device's architecture (other than TensorRT) could be beneficial. Some devices have specialized libraries or SDKs designed to optimize models specifically for their architecture.

Keep experimenting and don’t hesitate to reach out if you have more questions! 💪


👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐

@github-actions github-actions bot added the Stale label Jun 15, 2024
@github-actions github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) Jun 25, 2024