
How to run yolov8 on OpenVino? #191

Closed
1 task done
dhaval-zala-aivid opened this issue Jan 10, 2023 · 61 comments
Labels
question (Further information is requested), Stale

Comments

@dhaval-zala-aivid

Search before asking

Question

I am using the same script as for yolov5 to run yolov8 on OpenVINO, but it's not working. So, how do I run yolov8 on OpenVINO?

Additional

No response

@dhaval-zala-aivid dhaval-zala-aivid added the question Further information is requested label Jan 10, 2023
@dhaval-zala

dhaval-zala commented Feb 2, 2023

Thanks

I ran the yolov8n INT8 IR-converted model using this notebook; it consumes 40% of my CPU on a 10 FPS live stream. But if I restrict threads using the taskset command, it consumes only 8-10% CPU on the same stream. So why is it necessary to restrict the CPU, and how can I fix this?

@adrianboguszewski
Contributor

@yury-gorbachev, any idea who can help with that?

@yury-gorbachev

@dmitry-gorokhov or @wangleis can you guys look at this question?

@wangleis

wangleis commented Feb 2, 2023

@dhaval-zala-aivid @dhaval-zala Could you please share the full command you use to restrict threads with taskset?

@wangleis

wangleis commented Feb 3, 2023

@dhaval-zala-aivid @dhaval-zala Could you please share the output of lscpu and lscpu -e as well?

@dhaval-zala

I'm using taskset -c 0 python scripy.py to restrict the CPU.

lscpu and lscpu -e

(screenshots of the lscpu and lscpu -e output)

@wangleis

wangleis commented Feb 3, 2023

@dhaval-zala-aivid @dhaval-zala OpenVINO uses all the CPU resources provided to the application for inference, but when there is not enough input, the CPU will idle.

The yolo_openvino_demo.py you used loads the network into OpenVINO with 2 infer requests. The following benchmark_app command also loads the network with 2 infer requests. Running this command with the yolov8n INT8 IR converted by 230-yolov8-optimization.ipynb on a platform with 4 cores and 8 logical CPUs, the throughput is 20.31 FPS.

  • benchmark_app -d CPU -m openvino_notebooks/notebooks/230-yolov8-optimization/yolov8n_openvino_model/yolov8n.xml -shape [1,3,640,640] -nireq 2

If you run benchmark_app in throughput mode as below, the throughput reaches 30.04 FPS.

  • benchmark_app -d CPU -m openvino_notebooks/notebooks/230-yolov8-optimization/yolov8n_openvino_model/yolov8n.xml -shape [1,3,640,640] -hint throughput

So a 10 FPS live stream cannot provide enough input in your case. Please try the two benchmark_app commands; all CPU resources on your platform will be used.
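As a back-of-the-envelope illustration (a simple pipelining model, not OpenVINO internals) of why more in-flight infer requests raise throughput until the cores saturate:

```python
def throughput_fps(nireq, latency_s):
    # steady-state pipelined throughput: requests in flight / time per request
    return nireq / latency_s

# per-request latency implied by 2 in-flight requests at 20.31 FPS
latency = 2 / 20.31
print(round(throughput_fps(3, latency), 1))  # a third in-flight request pushes toward ~30 FPS
```

This matches the observed jump from 20.31 FPS (nireq 2) toward 30.04 FPS with the throughput hint, which raises the number of in-flight requests.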

@HENNESSYxie

HENNESSYxie commented Feb 28, 2023

Hey @dhaval-zala-aivid, please try this notebook: https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/230-yolov8-optimization/230-yolov8-optimization.ipynb

I have tried this notebook for inference with my model, but I'm receiving this error:
`RuntimeError Traceback (most recent call last)
in
32
33 input_image = np.array(Image.open(IMAGE_PATH))
---> 34 detections = detect(input_image, det_compiled_model)[0]
35 image_with_boxes = draw_results(detections, input_image, label_map)
36

2 frames
/usr/local/lib/python3.8/dist-packages/ultralytics/yolo/utils/ops.py in non_max_suppression(prediction, conf_thres, iou_thres, classes, agnostic, multi_label, labels, max_det, nc, max_time_img, max_nms, max_wh)
198
199 t = time.time()
--> 200 output = [torch.zeros((0, 6 + nm), device=prediction.device)] * bs
201 for xi, x in enumerate(prediction): # image index, image inference
202 # Apply constraints

RuntimeError: Trying to create tensor with negative dimension -73: [0, -73]`

@eaidova

eaidova commented Mar 1, 2023

@HENNESSYxie if I understand correctly, your model was trained on a different dataset? Inside the detect function there is a parameter, nc, passed when calling non_max_suppression; it specifies the number of classes the model knows. Please modify it to match the number of classes your model supports.
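To see where the -73 in the traceback above comes from (a sketch assuming the standard YOLOv8 output layout of 4 box values plus nc class scores per prediction): with the default nc=80, a model trained on fewer classes makes the inferred extra-row count nm negative:

```python
def nm_from_rows(num_rows, nc=80):
    # how the NMS code infers the number of extra (mask) rows from the
    # prediction tensor: rows = 4 box coordinates + nc class scores + nm extras
    return num_rows - 4 - nc

# a 1-class model exports 4 + 1 = 5 rows per prediction, so with the default
# nc=80, torch.zeros((0, 6 + nm)) is asked for a negative dimension:
print(6 + nm_from_rows(5))   # -73, matching the RuntimeError above
```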

@HENNESSYxie

@HENNESSYxie if I understand correctly, your model was trained on a different dataset? Inside the detect function there is a parameter, nc, passed when calling non_max_suppression; it specifies the number of classes the model knows. Please modify it to match the number of classes your model supports.

It worked for me. Thanks!

@akashAD98
Contributor

@wangleis @adrianboguszewski is there any simple script to run the yolov8 OpenVINO model?
The notebook you mentioned is very big, and to run one part of it you need to run the whole notebook.

from ultralytics import YOLO
import time

model = YOLO("yolov8n.pt")
start_time = time.time()
src = '/content/inputvideo.mp4'
results = model.track(source=src, tracker="botsort.yaml", verbose=False, save=True)

I just want to use OpenVINO weights instead of yolov8n.pt. Is it possible to get that script?

something like

from openvino.runtime import Core

core = Core()
# read the converted model (path to the object detector)
model = core.read_model("object_detector/best_weapons22_int8_IRnew.xml")
# load the model on the CPU device
model = core.compile_model(model, 'CPU')

@adrianboguszewski
Contributor

I see. We don't have a simple script for that, but you can look here for a simpler solution.

@glenn-jocher
Member

glenn-jocher commented Mar 31, 2023

@akashAD98 you don't need any external notebook or custom script to run OpenVINO models, you run them with the ultralytics package just like any other export format, i.e.:

CLI

yolo predict model=yolov8n_openvino_model/
yolo predict model=yolov8n.onnx
yolo predict model=yolov8n.engine
# ... etc.

Python

from ultralytics import YOLO

model = YOLO('yolov8n_openvino_model/')
results = model(img)

See https://docs.ultralytics.com/modes/export for details

@akashAD98
Contributor

I tested with the tracker and this is the speed I'm getting; I'll try the yolov8s INT8 format to get more FPS.
@glenn-jocher thanks a lot, you're always supporting and helping.

yolov8n.xml --> Speed: 0.8ms preprocess, 130.6ms inference, 1.2ms postprocess per image at shape (1, 3, 640, 640)
yolov8n.pt --> Speed: 0.6ms preprocess, 130.7ms inference, 1.0ms postprocess per image at shape (1, 3, 640, 640)
yolov8s.xml --> Speed: 1.0ms preprocess, 396.0ms inference, 1.3ms postprocess per image at shape (1, 3, 640, 640)
yolov8s.pt --> Speed: 0.5ms preprocess, 416.1ms inference, 1.1ms postprocess per image at shape (1, 3, 640, 640)
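For reference, these Speed lines convert to per-stream FPS as 1000 / (preprocess + inference + postprocess) milliseconds:

```python
def fps(pre_ms, inf_ms, post_ms):
    # frames per second from the per-image millisecond timings printed above
    return 1000.0 / (pre_ms + inf_ms + post_ms)

print(round(fps(0.8, 130.6, 1.2), 1))   # yolov8n.xml: ~7.5 FPS
print(round(fps(0.5, 416.1, 1.1), 1))   # yolov8s.pt:  ~2.4 FPS
```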


@akashAD98
Contributor

#1735

@glenn-jocher
Member

@akashAD98 OpenVINO should show speedups on Intel CPUs, e.g. see the CI benchmarks here. On other CPUs, ONNX may be faster:

(screenshot of CI benchmark results)

@akashAD98
Contributor

akashAD98 commented Apr 3, 2023

@glenn-jocher is that the FP32 OpenVINO model or the INT8 .xml model?
And if it's the FP32 model, is there any way to convert it into INT8 format?

@akashAD98
Contributor

akashAD98 commented Apr 4, 2023

@glenn-jocher when I did the FPS check, this is what I'm getting. I tried it on a 16-core machine, and yolov8s.pt has higher FPS than the OpenVINO yolov8s.xml file; may I know why?
Also, the yolov8 .pt models use a 384x640 size during predictions; may I know why?

(screenshot of FPS results)

@glenn-jocher
Member

@akashAD98 It's curious that you are getting better results with yolov8s.pt than with openvino yolov8s.xml on an Intel CPU, because the OpenVINO tools are optimized specifically for Intel hardware. However, there may be other factors that affect the results, such as the size of the model, the preprocessing, or the batch size.

Regarding your second question: YOLO models require input dimensions that are multiples of the maximum stride (32). With rectangular inference, the long side is scaled to the requested imgsz (640) and the short side is only padded up to the next multiple of 32, which for typical 16:9 frames gives 384x640 rather than a fully padded 640x640.
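A minimal sketch of that rounding (assuming stride 32, as in YOLOv8; rect_shape is an illustrative helper, not an Ultralytics function):

```python
import math

def rect_shape(h, w, imgsz=640, stride=32):
    # scale so the long side equals imgsz, then round the short side
    # up to the next multiple of the model stride
    r = imgsz / max(h, w)
    return (math.ceil(h * r / stride) * stride,
            math.ceil(w * r / stride) * stride)

print(rect_shape(720, 1280))   # a 720p frame -> (384, 640)
```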

@akashAD98
Contributor

@adrianboguszewski is there any way to improve FPS using the YOLO code and OpenVINO weights? For example, passing the number of threads, an asynchronous queue, or any other method we can integrate with the current YOLO code?

@adrianboguszewski
Contributor

Yes. We created a new notebook showing how to improve the performance in OpenVINO. It hasn't been merged yet, but you can see some tricks here

@glenn-jocher
Member

@akashAD98 Yes, there are several ways to improve the performance of OpenVINO models. One is to tune the number of threads, e.g. set the INFERENCE_NUM_THREADS property to match your hardware. Another is asynchronous inference, which lets inference overlap with pre- and post-processing. You can also use OpenVINO's own performance tools, such as benchmark_app, to find the best configuration for your model.

@akashAD98
Contributor

@glenn-jocher I want to add this code; is there any way to customise the YOLO code?
I'm directly loading the OpenVINO model, but I want to add a few parameters to it:

from ultralytics import YOLO
model = YOLO('yolov8n_openvino_model/')
results = model(img)

Below are some of the improvements I want to try:

import os
from openvino.runtime import Core

core = Core()
model = core.read_model("object_detector/yolov8.xml")
num_cores = os.cpu_count()
settings = {"INFERENCE_NUM_THREADS": str(num_cores), "AFFINITY": "NUMA"}
model = core.compile_model(model=model, device_name="CPU", config=settings)

@glenn-jocher
Member

@akashAD98 Yes, you can customize the YOLO code to add features such as setting the number of threads or tuning other parameters. One way to do this is to modify the YOLO code directly and add functions that allow you to do this customization, or you could create a new class that inherits from the YOLO class and add the new features there. Just be careful when you modify the code, because it might affect other parts of the YOLO codebase. The safer way to add these features is by creating a wrapper function that loads the OpenVINO model and then sets the desired parameters before calling the YOLO function with the loaded model.
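As a tiny illustration of the wrapper idea (the helper name is hypothetical, not part of the Ultralytics or OpenVINO APIs): keep the plugin settings in one place and pass the resulting dict to core.compile_model(..., config=...):

```python
import os

def openvino_cpu_config(num_threads=None, affinity=None):
    # build the config dict for core.compile_model; OpenVINO expects
    # string values for these plugin properties
    cfg = {"INFERENCE_NUM_THREADS": str(num_threads or os.cpu_count())}
    if affinity:
        cfg["AFFINITY"] = affinity
    return cfg

print(openvino_cpu_config(16, "NUMA"))
```

A wrapper class could then call this helper before compiling the model, keeping the tuning knobs out of the inference code.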

@akashAD98
Contributor

akashAD98 commented Apr 4, 2023

@glenn-jocher I was using the INT8 format of the OpenVINO weights. I compared its FPS with the regular yolov8 OpenVINO weights and I'm getting exactly the same FPS; is export='openvino' by default converting the .pt weights into INT8 format?

Also, during the OpenVINO export a metadata.yaml file is generated.
Even if I pass batch=10, it doesn't reflect any change; I get exactly the same FPS.

(screenshot of export output)

@glenn-jocher
Member

@akashAD98 The format argument in export specifies how the model should be exported. format='openvino' converts the model to OpenVINO IR, but by default the weights stay in FP32; INT8 requires explicit quantization (e.g. exporting with int8=True where supported, or post-training quantization with NNCF), so identical FPS suggests both runs used the same precision. The metadata.yaml file stores information about the exported model, such as its input shape and class names. If changing batch=10 makes no difference, the model was likely exported with a fixed batch size of 1; re-export with the desired batch size for it to take effect.

@github-actions

github-actions bot commented May 6, 2023

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.


Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐

@glenn-jocher
Member

@achuntelolan you're welcome! If you have any more questions or need further assistance, feel free to ask. Happy coding! 😊

@achuntelolan

Hi, will the YOLOv8 OpenVINO model work with OpenVINO version 2022 and below, and with Python 3.6? My current project runs on this configuration. Can anyone please help me?

@glenn-jocher
Member

@achuntelolan hello! 👋 YOLOv8 models exported for OpenVINO should be compatible with OpenVINO 2022 versions and work fine with Python 3.6. However, it's always good to test with your specific setup. If you encounter any issues, make sure to have the latest version of the YOLOv8 repository and consider updating OpenVINO if possible. Here's a quick snippet on how to load and use the model:

from openvino.runtime import Core

core = Core()
model = core.read_model(model="path/to/your/model.xml")
compiled_model = core.compile_model(model=model, device_name="CPU")
# Now you're ready to make predictions with `compiled_model`!

If you run into any specific errors, feel free to share them here! 😊

@achuntelolan

Hi @glenn-jocher, while trying to load the OpenVINO model of YOLOv8 I am getting this error (I am using the OpenVINO 2021 version):
Traceback (most recent call last):
File "yolo-old-v8.py", line 199, in
start_function()
File "/home/anoop/AnoopAJ/ML/new_code_workshop/YOLOV8/yolov4/lib/python3.8/site-packages/memory_profiler.py", line 1188, in wrapper
val = prof(func)(*args, **kwargs)
File "/home/anoop/AnoopAJ/ML/new_code_workshop/YOLOV8/yolov4/lib/python3.8/site-packages/memory_profiler.py", line 761, in f
return func(*args, **kwds)
File "yolo-old-v8.py", line 132, in start_function
yolo = predict(input_image_size,batch_size,thresh, common_path)
File "yolo-old-v8.py", line 16, in init
self.yolo = YOLO(inp_size=inp_size,
File "/home/anoop/AnoopAJ/ML/new_code_workshop/YOLOV8/yolov8.py", line 71, in init
net = ie.read_network(model=model_xml,weights=model_bin)
File "ie_api.pyx", line 368, in openvino.inference_engine.ie_api.IECore.read_network
File "ie_api.pyx", line 411, in openvino.inference_engine.ie_api.IECore.read_network
RuntimeError: Check 'false' failed at src/frontends/common/src/frontend.cpp:53:
Converting input model
Cannot create Interpolate layer /model.10/Resize id:164 from unsupported opset: opset11

@adrianboguszewski
Contributor

@achuntelolan OV 2021 is a very old version. Any chance to update it to something newer? I think it may resolve your issue.

@achuntelolan

Hi @adrianboguszewski, updating to a newer version is a little difficult because I am running this project individually on 88 systems in different locations in India, so updating the version is the last resort before we try to resolve it using the older version.

@adrianboguszewski
Contributor

Could you report your bug here: https://github.com/openvinotoolkit/openvino/issues? I think our developers will be able to help :)

@achuntelolan

Hi, when I try to run YOLOv8 on OpenVINO version 2022.2.0 with Python 3.6, I get this error:
Check 'false' failed at frontends/common/src/frontend.cpp:54:
Converting input model

@adrianboguszewski
Contributor

Did you convert the model to OV with 2022.2 as well? The best way is to convert the model with the same version you will use for inference.

@achuntelolan

How can I do that?

@adrianboguszewski
Contributor

Have you tried this?

from ultralytics import YOLO

# Load a YOLOv8n PyTorch model
model = YOLO('yolov8n.pt')

# Export the model
model.export(format='openvino')  # creates 'yolov8n_openvino_model/'

# Load the exported OpenVINO model
ov_model = YOLO('yolov8n_openvino_model/')

# Run inference
results = ov_model('https://ultralytics.com/images/bus.jpg')

@achuntelolan

This just exports the PyTorch model to an OpenVINO model, right? I have done this. What I need is how to convert it to a model supported by OpenVINO 2022.

@adrianboguszewski
Contributor

You just need to run the code above, when you have OpenVINO 2022 installed.

@glenn-jocher
Member

Absolutely! Once you have OpenVINO 2022 installed, running the code provided will export your YOLOv8 model in a format compatible with OpenVINO 2022. This ensures that the model utilizes the latest optimizations and features available in the newer version of OpenVINO. If you encounter any issues during the process, feel free to reach out! 😊

@achuntelolan

achuntelolan commented May 20, 2024

Ya ok, but I am using Python 3.6 with the OpenVINO 2022.2.0 version. When I try to install the ultralytics package using pip under Python 3.6, it fails with this error:

pip3 install ultralytics
ERROR: Could not find a version that satisfies the requirement ultralytics (from versions: none)
ERROR: No matching distribution found for ultralytics
And when I tried with Python 3.8 and the OpenVINO 2021 version, I saw the ultralytics package auto-updating OpenVINO:

Installing collected packages: openvino-telemetry, openvino
Attempting uninstall: openvino
Found existing installation: openvino 2021.4.2
Uninstalling openvino-2021.4.2:
Successfully uninstalled openvino-2021.4.2
Successfully installed openvino-2024.1.0 openvino-telemetry-2024.1.0

@adrianboguszewski
Contributor

I believe you need to use a newer version of Ultralytics as well; then a newer OpenVINO will be installed.

@glenn-jocher
Member

You're right! Upgrading to a newer version of Ultralytics can help ensure compatibility with the latest OpenVINO. You can update the Ultralytics package using pip:

pip install ultralytics --upgrade

This should also handle any necessary updates to OpenVINO. Let me know if this resolves the issue or if there's anything else I can assist you with! 😊

@achuntelolan

@glenn-jocher Yes, but I want to work with an old version of OpenVINO.

@glenn-jocher
Member

Hi there! To work with an older version of OpenVINO, you can manually specify the version when installing OpenVINO. For example, if you want to install OpenVINO 2022.2.0, you can use:

pip install openvino==2022.2.0

Make sure your environment is compatible with the version you're installing. If you have any more questions or need further assistance, feel free to ask! 😊

@achuntelolan

Hi, I manually installed this version, but after running the Python code to convert the YOLOv8 PyTorch model to the OpenVINO model, Ultralytics automatically updated OpenVINO to the latest version.

@adrianboguszewski
Contributor

@achuntelolan what version of ultralytics do you use?

@achuntelolan

The latest version.

@adrianboguszewski
Contributor

That's the problem. The Ultralytics package version is linked to the OpenVINO version. If you want to use older OpenVINO, you need to use older Ultralytics as well e.g. 8.0.128 which is linked to OV 2022.3 or higher. So:

  1. Downgrade Ultralytics to 8.0.128
  2. Downgrade OpenVINO to 2022.3
  3. Use the export function

@glenn-jocher
Member

@adrianboguszewski absolutely! Downgrading both Ultralytics and OpenVINO to versions that are compatible with each other is a good approach. Here's a quick way to do it:

pip install ultralytics==8.0.128
pip install openvino==2022.3

Then, you can proceed with the export function as usual. Let me know if this helps or if you run into any other issues! 😊

@achuntelolan

After these changes, why am I getting this error:
File "/home/atai/Anoop/yolov8/lib/python3.8/site-packages/torch/serialization.py", line 1439, in find_class
return super().find_class(mod_name, name)
ModuleNotFoundError: No module named 'ultralytics.utils'

@glenn-jocher
Member

Hi @achuntelolan,

It looks like the error is due to a missing module, likely because of a version mismatch. Here’s a step-by-step guide to resolve it:

  1. Ensure Compatibility: Make sure you have compatible versions of Ultralytics and OpenVINO. For instance, Ultralytics 8.0.128 works well with OpenVINO 2022.3.

    pip install ultralytics==8.0.128
    pip install openvino==2022.3
  2. Reinstall Dependencies: Sometimes, a fresh installation can resolve unexpected issues.

    pip uninstall ultralytics openvino
    pip install ultralytics==8.0.128 openvino==2022.3
  3. Check Imports: Ensure your script imports the correct modules and paths.

If the issue persists, please share more details about your setup. Happy coding! 😊

@achuntelolan

achuntelolan commented May 28, 2024

Yes, I'm using the same versions you gave, and this is my script for exporting:

from ultralytics import YOLO


model = YOLO('best.pt')

# Export the model
model.export(format='openvino',imgsz=(640,384))
ov_model = YOLO('yolov8n_openvino_model/')

# Run inference
results = ov_model('https://ultralytics.com/images/bus.jpg')

@glenn-jocher
Member

Hi @achuntelolan,

Thanks for sharing your script! It looks mostly correct, but there might be a small issue with the model path after export. Here’s a refined version:

from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO('best.pt')

# Export the model to OpenVINO format
model.export(format='openvino', imgsz=(640, 384))

# Load the exported OpenVINO model
ov_model = YOLO('best_openvino_model/')  # Ensure this matches the export directory

# Run inference
results = ov_model('https://ultralytics.com/images/bus.jpg')

Make sure the directory name (best_openvino_model/) matches the one created during export. If you still encounter issues, please let us know! 😊
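One way to avoid the directory-name pitfall is to derive the folder from the weights filename, since Ultralytics names the export folder `<stem>_openvino_model`; a small helper sketch assuming that convention:

```python
from pathlib import Path

def openvino_export_dir(weights):
    # 'best.pt' is exported to 'best_openvino_model/'
    return Path(weights).stem + "_openvino_model"

print(openvino_export_dir("best.pt"))   # best_openvino_model
```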

@achuntelolan

Actually, I am getting the error while exporting, so I have a doubt: I trained the model with the latest Ultralytics version but am converting with the older version. So I am retraining my model with Ultralytics 8.0.128 and will then try the conversion to see whether any issues remain. Anyway, I am really happy to say that you give huge support in resolving issues; I didn't get this much support even from my seniors. Thanks @glenn-jocher for your valuable support.

@glenn-jocher
Member

Hi @achuntelolan,

Thank you for your kind words! 😊

Yes, training your model with Ultralytics 8.0.128 and then exporting it should resolve the compatibility issues. It's always best to keep the training and exporting environments consistent.

If you encounter any further issues or need additional assistance, feel free to reach out. We're here to help!

Best of luck with your retraining and exporting!

Warm regards!

10 participants