
Any support for the --grid parameter while exporting a .onnx model? #12

Closed
akashAD98 opened this issue Dec 30, 2022 · 24 comments

Comments

@akashAD98

Thanks for the repo. I was able to run the yolov7 repo on my system. Is it possible to run it with the --grid parameter? What changes do I need to make? I'm getting a size-mismatch issue with --grid in your yolo.py script.

@OpenVINO-dev-contest
Owner

Hi @akashAD98, we have this notebook demonstrating the post-processing for a model exported with the --grid parameter:
https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/226-yolov7-optimization/226-yolov7-optimization.ipynb

@OpenVINO-dev-contest
Owner

That means you can feed the model's single output directly into the NMS module without concatenating the 3 raw outputs.
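For reference, here is a minimal post-processing sketch for a --grid export, assuming the usual YOLOv7 output layout of [1, 25200, 85] (cx, cy, w, h, objectness, 80 class scores); the random array is just a stand-in for the real model output:

import numpy as np
import cv2

# dummy stand-in for the single decoded output of a --grid model
pred = np.random.rand(1, 25200, 85).astype(np.float32)[0]    # (25200, 85)

obj = pred[:, 4]                                   # objectness scores
keep = obj > 0.25                                  # confidence filter
boxes = pred[keep, :4].copy()                      # cx, cy, w, h
scores = obj[keep] * pred[keep, 5:].max(axis=1)    # obj * best class score
class_ids = pred[keep, 5:].argmax(axis=1)

# convert cx,cy,w,h to top-left x,y,w,h for OpenCV's NMS
boxes[:, 0] -= boxes[:, 2] / 2
boxes[:, 1] -= boxes[:, 3] / 2
kept = cv2.dnn.NMSBoxes(boxes.tolist(), scores.tolist(), 0.25, 0.45)

No anchor/grid decoding or concatenation is needed; the three detection heads are already merged inside the exported model.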

@akashAD98
Author

akashAD98 commented Dec 30, 2022

@OpenVINO-dev-contest Yes, I tried that notebook, but I'm facing issues with inference on video/webcams, so I want to use your code for webcam/video inference.

@akashAD98
Author

Without --grid I'm getting no detections here: [screenshot]

@akashAD98
Author

Also, I'm not able to read my custom model, which was exported without the --grid parameter:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-13-db161e3ad74f> in <module>
      2 core = Core()
      3 # read converted model
----> 4 model = core.read_model('model/best_veh_withbgnew.xml')
      5 # load model on CPU device
      6 compiled_model = core.compile_model(model, 'CPU')

RuntimeError: Check 'false' failed at C:\Jenkins\workspace\private-ci\ie\build-windows-vs2019\b\repos\openvino\src\frontends\common\src\frontend.cpp:54:
Converting input model
Incorrect weights in bin file!


@OpenVINO-dev-contest
Owner

> Also, I'm not able to read my custom model ... Incorrect weights in bin file!

Did you get the model from Model Optimizer, and did the error happen during offline model conversion?

@OpenVINO-dev-contest
Owner

> Without --grid I'm getting no detections here.

Yes, this notebook only covers the grid model.

@akashAD98
Author

> Did you get the model from Model Optimizer, and did the error happen during offline model conversion?

I'm using:


from openvino.tools import mo
from openvino.runtime import serialize

# convert the ONNX model to OpenVINO IR in memory
model = mo.convert_model('model/best_veh_withbgnew.onnx')
# serialize writes the .xml plus a matching .bin weights file next to it
serialize(model, 'model/best_veh_withbgnew.xml')

for the conversion. The .xml file is stored on disk, but I'm not able to read it back.

@akashAD98
Author

> Yes, this notebook only covers the grid model.

1. Can you please give some suggestions on how I can use --grid with your yolov.py code?
2. To convert the .xml model to INT8 with NNCF, do I need to pass data? For a custom model, what data format should I provide? Does it require annotations, or can I directly use YOLO-format data (images and .txt files)?

@OpenVINO-dev-contest
Owner

Hi @akashAD98, the code is updated. You can enable the grid model by adding the parameter --grid True to the inference script.

@OpenVINO-dev-contest
Owner

> for the conversion. The .xml file is stored on disk, but I'm not able to read it back.

Did you get a .bin file in the same folder as the .xml file?
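If the .bin file is there but possibly mismatched, passing the weights path to read_model explicitly can help isolate the problem (a sketch, reusing the paths from your snippet):

from openvino.runtime import Core

core = Core()
# an explicit weights path rules out the runtime picking up a stale or wrong .bin
model = core.read_model(model='model/best_veh_withbgnew.xml',
                        weights='model/best_veh_withbgnew.bin')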

@akashAD98
Author

@OpenVINO-dev-contest Yes, I got both the .bin and .xml files in the folder.

@akashAD98
Author

akashAD98 commented Dec 30, 2022

@OpenVINO-dev-contest The issue has been solved; I restarted my system and that fixed it.

I have another question about NNCF post-training quantization: for a custom model, what format of data do I need to pass? Should I keep the COCO val2017 data or use my own data?

My goal is to convert to INT8 format, and I don't think that's possible without data.

quantized_model = nncf.quantize(model, quantization_dataset, preset=nncf.QuantizationPreset.MIXED)

serialize(quantized_model, 'model/yolov7-tiny_int8.xml')

@OpenVINO-dev-contest
Owner

> I have another question about NNCF post-training quantization: for a custom model, what format of data do I need to pass?

You should define your dataloader and preprocessing first. In the notebook example, we use the COCO format.
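A minimal sketch of such a calibration dataset, assuming your images sit in a hypothetical calib_images/ folder and the model takes a 640x640 NCHW float input; basic post-training quantization only needs representative images, not annotations:

from pathlib import Path
import cv2
import numpy as np
import nncf

def transform_fn(image_path):
    # replicate the model's preprocessing for each calibration image
    img = cv2.imread(str(image_path))
    img = cv2.resize(img, (640, 640))
    img = img[:, :, ::-1].transpose(2, 0, 1)           # BGR->RGB, HWC->CHW
    img = np.ascontiguousarray(img, dtype=np.float32) / 255.0
    return img[None]                                   # add batch dimension

image_paths = sorted(Path('calib_images').glob('*.jpg'))
quantization_dataset = nncf.Dataset(image_paths, transform_fn)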

@bbartling

bbartling commented Dec 30, 2022 via email

@akashAD98
Author

@bbartling I was able to run it on both Windows and Ubuntu Linux systems.

@bbartling

bbartling commented Jan 2, 2023 via email

@akashAD98
Author

Size in terms of w/h or model memory size? The input is 640×640, and yolov7-tiny.onnx / yolov7-tiny.bin is 24 MB.

@superkido511

> Hi @akashAD98, the code is updated. You can enable the grid model by adding the parameter --grid True to the inference script.

Hello, could you also add the grid option to the C++ script?

@OpenVINO-dev-contest
Owner

> Hello, could you also add the grid option to the C++ script?

Updated. You can pass true after the C++ run command.

@superkido511

Thank you so much!


@superkido511

> Updated. You can pass true after the C++ run command.

Just one more question: what does total_num = 25200 mean? My custom model only has 10 classes instead of COCO's 80, so I think I also need to change this number, along with changing 85 to 15, in the code?

@OpenVINO-dev-contest
Owner

> Just one more question: what does total_num = 25200 mean?

25200 is the maximum number of candidate boxes the model can output, and yes, you should switch 85 to 15. Please make sure your model's output shape is [1, 25200, 15] before you change the code.
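As a quick sanity check before editing the code, you can print the output shape (a sketch; the model path is hypothetical). For a 640x640 input, 25200 = 3 anchors x (80*80 + 40*40 + 20*20) grid cells at strides 8/16/32, and 15 = 4 box coordinates + 1 objectness + 10 class scores:

from openvino.runtime import Core

core = Core()
model = core.read_model('model/custom_model.xml')    # hypothetical path
print(model.output(0).partial_shape)                 # expect [1,25200,15] for 10 classes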

@superkido511

> 25200 is the maximum number of candidate boxes the model can output, and yes, you should switch 85 to 15.

I got it. Thank you!
