
[Bug] Yolo Performance Drop When Using OV2022.1 #17044

Closed
HonzaCuhel opened this issue Apr 19, 2023 · 11 comments

@HonzaCuhel

Hello,

we have been experiencing a performance drop with YoloV7 models converted with our tools.luxonis.com since we switched to OpenVINO version 2022.1.0 (before, we were using version 2021.4.2). Initially, we also saw a performance drop with YoloV6 R3 and YoloV8 models, but after switching to exporting the models to IR format with the --use_legacy_frontend flag, the performance of YoloV6 R3 and YoloV8 became comparable to before.

These are the FPS we measured with the generated .blob files:

| Model name | OpenVINO 2021.4.2 [FPS] | OpenVINO 2022.1.0 [FPS] | OpenVINO 2022.1.0 with --use_legacy_frontend [FPS] |
| --- | --- | --- | --- |
| YoloV6n R3 | 50.45 ± 0.26 | 47.44 ± 0.56 | 50.31 ± 0.17 |
| YoloV7-tiny | 47.53 ± 1.15 | 41.05 ± 0.24 | 41.22 ± 0.16 |
| YoloV8-tiny | 31.30 ± 0.22 | 29.13 ± 0.03 | 31.20 ± 0.11 |

We investigated the exported .xml files, compared the operations and their total counts, and found that the models exported with version 2022.1.0 use fewer unique operation types, but the overall number of operations is greater. Here is a link to the table containing all the findings.

The models were exported with these commands:

  • YoloV6
    mo --input_model yolov6nr3-simplified.onnx --output_dir "output/" --model_name yolov6nr3 --data_type FP16 --reverse_input_channels --scale 255 --output "output1_yolov6r2,output2_yolov6r2,output3_yolov6r2"
  • YoloV7
    mo --input_model yolov7t-simplified.onnx --output_dir "output/" --model_name yolov7t --data_type FP16 --reverse_input_channels --scale 255 --output "output1_yolov7,output2_yolov7,output3_yolov7"
  • YoloV8
    mo --input_model yolov8n-simplified.onnx --output_dir "output_yolov8n/" --model_name yolov8n --data_type FP16 --reverse_input_channels --scale 255 --output "output1_yolov6r2,output2_yolov6r2,output3_yolov6r2"

Here are the model files.

System information (version)
  • OpenVINO => 2022.1
  • Operating System / Platform => Linux
  • Problem classification: Model Conversion
Our questions
  1. Do you know what could be the cause of the performance drop when using OpenVINO version 2022.1.0? Not all models were affected, e.g. the YoloV5 and YoloV6 R2 models were not.
  2. Do you know why, even after using the --use_legacy_frontend flag, the performance of YoloV7 is still worse?
  3. Why are there so many additional Convert layers in exported models using OpenVINO 2022.1.0?
    [screenshot: exported model graph showing the additional Convert layers]

Thank you very much!

Best
Jan

@tomdol
Contributor

tomdol commented Apr 19, 2023

Hi Jan, thanks for reporting the issue. Please give us some time to investigate. In the meantime could you please share your HW configuration used for the tests?

@HonzaCuhel
Author

Hi @tomdol,
thank you very much. I totally understand. The FPS were measured on an OAK-1 camera connected to a laptop running Ubuntu 22.04.1 LTS with 32GB RAM and an Intel® Core™ i5-8300H CPU @ 2.30GHz × 8.
Kind regards,
Jan

@mbencer
Contributor

mbencer commented Apr 27, 2023

@HonzaCuhel Thank you for the great description! Since #7588 we have been representing fp16 constants this way, so the additional Convert layers are expected (they shouldn't affect performance, because those Converts are removed by the plugins). Note that --data_type is now deprecated and --compress_to_fp16 should be used instead. The analysis of the root cause of the performance degradation is in progress.
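For example, your YoloV7 export command would become roughly the following with the new flag (a sketch only; everything except the compression flag is taken from your original command):

```bash
# Same export as before, but with --compress_to_fp16 instead of the deprecated --data_type FP16
mo --input_model yolov7t-simplified.onnx --output_dir "output/" --model_name yolov7t \
   --compress_to_fp16 --reverse_input_channels --scale 255 \
   --output "output1_yolov7,output2_yolov7,output3_yolov7"
```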
Could you also provide the code you used to measure FPS?

@mbencer
Contributor

mbencer commented Apr 28, 2023

I've measured the provided yolo models using benchmark_app (you can find it in openvino/bin/intel64/Release after a build). The results are from our dev machine, so don't treat them as official numbers (but the proportions should hold).
An example command to run inference on a model:
./benchmark_app -m ~/models/ModelData/yolov8n_OV2021.4.2/new/yolov8n.xml
Note that by default the results are averaged from 12 inferences.
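If you want to pin the device and the run time explicitly, something along these lines should work (a sketch; the CPU device and the 30-second window are assumptions, the model path is the one above):

```bash
# Explicit device selection and a fixed 30-second measurement window
./benchmark_app -m ~/models/ModelData/yolov8n_OV2021.4.2/new/yolov8n.xml -d CPU -t 30
```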

| Model | Your IR, inference on 2021.4.2 [FPS] | Your IR, inference on current master [FPS] | IR generated on current master, inference on current master [FPS] |
| --- | --- | --- | --- |
| YoloV6 | 793.51 | 925.3 | 955.6 |
| YoloV7 | 687.2 | 698.04 | 708.5 |
| YoloV8 | 841 (still to be verified) | 781.98 | 823.93 |

I have a few questions/tips:

  1. Do you have any limitations that prevent you from using the newest version of OpenVINO?
  2. Could you please try to check the results using benchmark_app in your environment?
  3. Could you try to use the newest version of OpenVINO in your application? (Even if we find some problems, the fix will most likely be applied to master.)

Thanks,
Mateusz

@HonzaCuhel
Author

@mbencer Thank you very much for your answer! FPS was measured using our own tool inspired by benchmark_app; it should give the same results.

  1. Our current blobconverter, which converts the IR representation into a .blob file, only supports OpenVINO up to version 2022.1.
  2. Yes, I will check it.
  3. I will try.

Best,
Jan

@avitial removed the bug label on May 2, 2023
@mbencer
Contributor

mbencer commented May 4, 2023

@HonzaCuhel Could you elaborate more on this blob conversion? Do you mean exporting a compiled model into a binary representation, as compile_tool does?
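i.e., something along the lines of (a sketch; the MYRIAD device and the file names are assumptions):

```bash
# Compile the IR for a specific device and dump the resulting blob
./compile_tool -m yolov7t.xml -d MYRIAD -o yolov7t.blob
```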

@HonzaCuhel
Author

Hi @mbencer,

I apologize for my delayed response.

I've measured the provided yolo models using benchmark_app from master, and these are the results:

| Model name | Performance of IR 2021.4.2 [FPS] | Performance of IR 2022.1 [FPS] |
| --- | --- | --- |
| YoloV6n R3 | 47.13 | 45.72 |
| YoloV7t | 39.07 | 29.46 |
| YoloV8n | 37.54 | 33.85 |

I measured it on my laptop with the following specification:

  • OS: Ubuntu 22.04.1 LTS (64 bit)
  • CPU: Intel® Core™ i5-8300H CPU @ 2.30GHz × 8
  • RAM: 32GB
  • GPU: Mesa Intel® UHD Graphics 630 (CFL GT2)

> Could you elaborate more on this blob conversion? Do you mean exporting a compiled model into a binary representation, as compile_tool does?

Yes, something like that. We have our own tool. [link][repository]

Best,
Jan

@andrei-kochin
Contributor

Hello @HonzaCuhel ,

Could you please try the latest 2023.1 pre-release to check if the issue is still visible for you?

@andrei-kochin assigned gkrivor and unassigned mbencer and tomdol on Aug 4, 2023
@HonzaCuhel
Author

Hi @andrei-kochin ,

I'll try it.

Best,
Jan

@HonzaCuhel
Author

I just want to ask: will the MyriadX (MX) be supported in 2023.1?

@avitial
Contributor

avitial commented Dec 21, 2023

OpenVINO 2022.3.2 LTS supports MyriadX (Intel® Neural Compute Stick 2 and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs (HDDL)); check the LTS release notes for more information. OpenVINO 2023.1 does not support MX devices.

Closing this, I hope previous responses were sufficient to help you proceed. Feel free to reopen to ask any questions related to this topic.

@avitial closed this as completed on Dec 21, 2023