
Yolov5 to onnx for rk3588 #57

Open
sdpscnc opened this issue Jan 10, 2023 · 9 comments

Comments

@sdpscnc

sdpscnc commented Jan 10, 2023

How should the parameters be set when converting YOLOv5 to ONNX? I can take this ONNX (rknpu2/examples/rknn_yolov5_demo/convert_rknn_demo/yolov5/onnx_models/yolov5s_rm_transpose.onnx), convert it to RKNN, and run inference on the RK3588, but a model converted from my own weights (pt > onnx > rknn) fails at inference.

@how2flow

Clone this repo: https://github.com/airockchip/yolov5
Then run:
$ python export.py --weights "your_own_model" --rknpu "RK3588" --include "onnx"
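After the ONNX export, the remaining step is the ONNX-to-RKNN conversion with rknn-toolkit2. A rough sketch of that step is below; the file paths, the calibration dataset file, and the error handling are placeholders of mine, not the demo's actual conversion script.

```python
def convert_onnx_to_rknn(onnx_path, rknn_path, dataset="dataset.txt"):
    """Sketch of the ONNX -> RKNN step with rknn-toolkit2 (paths are placeholders)."""
    from rknn.api import RKNN  # imported inside so the sketch reads without the toolkit installed

    rknn = RKNN(verbose=True)
    # YOLOv5 expects pixels scaled to [0, 1]: mean 0, std 255 per channel.
    rknn.config(mean_values=[[0, 0, 0]],
                std_values=[[255, 255, 255]],
                target_platform="rk3588")
    if rknn.load_onnx(model=onnx_path) != 0:
        raise RuntimeError("load_onnx failed")
    # INT8 quantization needs a small calibration set: a text file
    # listing one image path per line.
    if rknn.build(do_quantization=True, dataset=dataset) != 0:
        raise RuntimeError("build failed")
    if rknn.export_rknn(rknn_path) != 0:
        raise RuntimeError("export_rknn failed")
    rknn.release()
```

Note that load_onnx also accepts an outputs= list of node names, which matters when the ONNX still contains post-processing nodes and you want to cut the graph at the three raw conv outputs.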

@Mitchelldscott

Mitchelldscott commented Apr 17, 2023

I was following this solution to deploy a model on a Khadas Edge2 running Ubuntu 20.04. It led to another odd issue:

E RKNN: [06:04:55.824] failed to submit!, op id: 82, op name: OutputOperator:output, flags: 0x1, task start: 0, task number: 217, run task counter: 207, int status: 0

I noticed the output size of my model is not the same as the one in the demo. Is there more information on the fork linked above?

My model

post process config: box_conf_threshold = 0.50, nms_threshold = 0.60

Read data/img/bus.jpg ...

img width = 640, img height = 640

Loading mode...

sdk version: 1.3.0 (c193be371@2022-05-04T20:16:33) driver version: 0.7.2

model input num: 1, output num: 3

  index=0, name=images, n_dims=4, dims=[1, 640, 640, 3], n_elems=1228800, size=1228800, fmt=NHWC, type=INT8, qnt_type=AFFINE, zp=-128, scale=0.003922

  index=0, name=output, n_dims=4, dims=[1, 24, 80, 80], n_elems=153600, size=153600, fmt=NCHW, type=INT8, qnt_type=AFFINE, zp=-128, scale=0.003921

  index=1, name=272, n_dims=4, dims=[1, 24, 40, 40], n_elems=38400, size=38400, fmt=NCHW, type=INT8, qnt_type=AFFINE, zp=-128, scale=0.003920

  index=2, name=274, n_dims=4, dims=[1, 24, 20, 20], n_elems=9600, size=9600, fmt=NCHW, type=INT8, qnt_type=AFFINE, zp=-128, scale=0.003921

Demo Model

post process config: box_conf_threshold = 0.50, nms_threshold = 0.60

Read data/img/bus.jpg ...

img width = 640, img height = 640

Loading mode...

sdk version: 1.3.0 (c193be371@2022-05-04T20:16:33) driver version: 0.7.2

model input num: 1, output num: 3

  index=0, name=images, n_dims=4, dims=[1, 640, 640, 3], n_elems=1228800, size=1228800, fmt=NHWC, type=INT8, qnt_type=AFFINE, zp=-128, scale=0.003922

  index=0, name=output, n_dims=5, dims=[1, 3, 85, 80], n_elems=1632000, size=1632000, fmt=UNDEFINED, type=INT8, qnt_type=AFFINE, zp=77, scale=0.080445

  index=1, name=371, n_dims=5, dims=[1, 3, 85, 40], n_elems=408000, size=408000, fmt=UNDEFINED, type=INT8, qnt_type=AFFINE, zp=56, scale=0.080794

  index=2, name=390, n_dims=5, dims=[1, 3, 85, 20], n_elems=102000, size=102000, fmt=UNDEFINED, type=INT8, qnt_type=AFFINE, zp=69, scale=0.081305
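The two dumps differ in head layout: the custom model emits fused NCHW maps with 24 channels, consistent with 3 anchors × (5 box/objectness terms + 3 classes), while the demo model keeps 5-D outputs where 85 = 5 + 80 COCO classes per anchor (the printed dims truncate the trailing 80). A quick arithmetic check of the printed n_elems values (my own cross-check, not from the thread):

```python
# Element counts implied by the two head layouts in the dumps above.
# Custom model: fused head, 24 channels = 3 anchors * (5 + 3 classes).
custom_elems = 1 * 24 * 80 * 80
# Demo model: n_dims=5; printed dims [1, 3, 85, 80] truncate a trailing 80.
demo_elems = 1 * 3 * 85 * 80 * 80
print(custom_elems, demo_elems)  # 153600 1632000
```

Both match the n_elems fields in the logs, so the size mismatch is a genuine difference in head layout, not a dump artifact.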

UPDATE:
I was converting models with the newest commit, while the Khadas examples are set up against an older commit (9ad79343fae625f4910242e370035fcbc40cc31a). It could be something on my end, but the ONNX that ships with the demo does not work on that commit unless you specify the right output names, and for my custom model those names differ from the demo's (the custom model works without specifying them). I'll try running the exported model with the newer SDK, which hopefully fixes the "failed to submit" error.

@sitnikov2020

sitnikov2020 commented May 5, 2023

clone this repo https://github.com/airockchip/yolov5 then, $ python export.py --weights "your_own_model" --rknpu "RK3588" --include "onnx"

Good day.
Did you do this with the official yolov5s or yolov5n? It doesn't work:
python3 detect.py --weights yolov5s.onnx --img 640 --conf 0.25 --source data/images
There are many false detections with the yolov5s.onnx bundled with the RKNN toolkit (that one converts to RKNN and works fine on the RK3588): http://joxi.ru/ZrJdkz0hwE1nD2
The same happens with any other official YOLOv5 converted with your command (python3 export.py --weights yolov5n.pt --rknpu "RK3588" --include "onnx"):
http://joxi.ru/bmo0JE9f3BMy12

But these ONNX models don't work on the RK3588 either:
http://joxi.ru/krDjRz0hKN0472 - the same multiple false detections with the RKNN model.

So, how do I get a working ONNX model from any official model?
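A likely reason for the false detections when running detect.py directly on the RKNPU-style export: that export strips the sigmoid/decode from the detect head, so the ONNX emits raw feature maps, and stock detect.py then thresholds raw logits as if they were decoded confidences. The RKNN demo applies the decode in its own post-process. A NumPy sketch of the standard YOLOv5 decode for one head (a hypothetical helper of mine, not the demo's code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_head(raw, anchors, stride, num_classes):
    """Standard YOLOv5 decode for one raw head output.

    raw: (na*(5+nc), H, W) feature map, as emitted by the RKNPU-style export.
    anchors: (na, 2) anchor sizes in pixels for this stride.
    Returns (na*H*W, 5+nc) rows of [cx, cy, w, h, obj, cls...].
    """
    na = len(anchors)
    no = 5 + num_classes
    _, h, w = raw.shape
    x = sigmoid(raw.reshape(na, no, h, w).transpose(0, 2, 3, 1))  # (na, H, W, no)
    gy, gx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    grid = np.stack((gx, gy), axis=-1)                            # (H, W, 2)
    xy = (x[..., 0:2] * 2.0 - 0.5 + grid) * stride
    wh = (x[..., 2:4] * 2.0) ** 2 * np.asarray(anchors).reshape(na, 1, 1, 2)
    return np.concatenate((xy, wh, x[..., 4:]), axis=-1).reshape(-1, no)
```

If you feed such an ONNX to stock detect.py without this decode, nearly every cell can clear the confidence threshold, which would look exactly like "many fake detections".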

@cvetaevvitaliy

So, how to achieve onnx model from any official model?

Hi!
Try this 'yolov5s' model; it was trained on three classes:
0 - xxx, 1 - car, 2 - person

Custom input: 1280x704 px

best_24-04-2023_rknn.zip

Screenshot from 2023-05-06 17-09-56

@how2flow

how2flow commented May 8, 2023

But, these onnx models doesnt work with rk3588 http://joxi.ru/krDjRz0hKN0472 - same multiple fake detections with rknn model

So, how to achieve onnx model from any official model?

First, I'm using an RK3568, but I don't think the mechanism is any different.

Didn't rknn-toolkit2 produce a yolov5n*.rknn file as output? Have you tested with that file?
I saw a similar log, and I don't know much about machine learning, so I wasn't sure whether the log really showed false detections or just scanning related to the input source and the data/dataset.

I can't say whether it's false detection as you describe, but there was no problem when I used the RKNN model.

@cvetaevvitaliy

Of course, it's the same.
I wrote instructions on how to convert a .pt file to .rknn:
rockchip-linux/rknn-toolkit2#159 (comment)

@cvetaevvitaliy

cvetaevvitaliy commented May 11, 2023

Example usage yolov5s, activation ReLU, input 1280x704

[ INFO ] RknnNpuInit() Line 47:Loading model...
[ INFO ] RknnNpuInit() Line 73:sdk version: 1.4.0 (a10f100eb@2022-09-09T09:07:14) driver version: 0.7.2
[ INFO ] RknnNpuInit() Line 83:Custom string: Model=yolov5s_ReLU epochs=300 date=11-05-2023-11h-50m-06s
[ INFO ] RknnNpuInit() Line 92:model input num: 1, output num: 3
[ INFO ] m_dump_tensor_attr() Line 326:  index=0, name=images, n_dims=4, dims=[1, 1280, 704, 3], n_elems=2703360, size=2703360, fmt=NHWC, type=INT8, qnt_type=AFFINE, zp=-128, scale=0.003922
[ INFO ] m_dump_tensor_attr() Line 326:  index=0, name=onnx::Reshape_272, n_dims=4, dims=[1, 24, 160, 88], n_elems=337920, size=337920, fmt=NCHW, type=INT8, qnt_type=AFFINE, zp=63, scale=0.149798
[ INFO ] m_dump_tensor_attr() Line 326:  index=1, name=onnx::Reshape_311, n_dims=4, dims=[1, 24, 80, 44], n_elems=84480, size=84480, fmt=NCHW, type=INT8, qnt_type=AFFINE, zp=54, scale=0.126033
[ INFO ] m_dump_tensor_attr() Line 326:  index=2, name=onnx::Reshape_350, n_dims=4, dims=[1, 24, 40, 22], n_elems=21120, size=21120, fmt=NCHW, type=INT8, qnt_type=AFFINE, zp=58, scale=0.124716
model is NHWC input fmt
[ INFO ] RknnNpuInit() Line 134:model input height=1280, width=704, channel=3
[ INFO ] PostprocessThread() Line 43:img width = 1920, img height = 1080
trim.6E74A0DC-52A3-4286-88F0-ED0A13BA143F.MOV
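The three output resolutions in this log follow directly from the 1280x704 input and YOLOv5's detection strides (a quick check, assuming the standard strides of 8, 16, and 32):

```python
# Head grid sizes = input size divided by the stride of each detection head.
h, w = 1280, 704
grids = [(h // s, w // s) for s in (8, 16, 32)]
print(grids)  # [(160, 88), (80, 44), (40, 22)]
```

These match the dumped output dims [1, 24, 160, 88], [1, 24, 80, 44], and [1, 24, 40, 22], so the export picked up the custom input size correctly.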

@plotnikovgp

plotnikovgp commented Jun 15, 2023

Example usage yolov5s, activation ReLU, input 1280x704 (full log quoted above)

Are you using the yolov5 inference example from https://github.com/khadas/edge2-npu? I tried to use your model with the edge2-npu yolov5 code example, but got the error "failed to submit!, op id: 102" :(

@cvetaevvitaliy

Are you using the yolov5 inference example from https://github.com/khadas/edge2-npu? I tried to use your model with the edge2-npu yolov5 code example, but got the error "failed to submit!, op id: 102" :(

You need to study that example first and understand how it works.
P.S. The label-lists.txt at your hard-coded path doesn't contain the right number of labels for this model.

Good luck!
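One way to catch that label-count mismatch early: for a fused YOLOv5 head, the output channel count pins down the class count the model was trained with, so you can compare it against the number of lines in the label file before running the demo. A small check (a hypothetical helper of mine, not part of the demo):

```python
def implied_num_classes(channels, num_anchors=3):
    """Class count implied by a fused YOLOv5 head: channels = na * (5 + nc)."""
    if channels % num_anchors != 0:
        raise ValueError("channel count not divisible by anchor count")
    return channels // num_anchors - 5

print(implied_num_classes(24))   # 3  -> the 24-channel heads in this thread
print(implied_num_classes(255))  # 80 -> stock COCO YOLOv5
```

If this number doesn't equal the line count of your label list, the post-process will index labels (and score arrays) incorrectly.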
