
how to use onnx-model? #1163

Closed

zys1994 opened this issue May 12, 2020 · 14 comments


@zys1994

zys1994 commented May 12, 2020

I converted PyTorch to ONNX, and then converted ONNX to OpenVINO.
I get a 10674x4 vector and a 10674x2 vector from the model.
I wonder how to use the 10674x4 vector for boxes and the 10674x2 vector for classes. Some of the values I printed confused me:

bbox:0.634615--0.923077--807534--9.26084e-06
cls:4.20465e-14--2.05051e-07
bbox:0.653846--0.923077--874751--7.72753e-05
cls:4.20465e-14--2.05052e-07
bbox:0.673077--0.923077--47167--0.000339734
cls:4.20465e-14--2.05048e-07
bbox:0.692308--0.923077--12231.7--0.00127464
cls:7.30383e-12--1.35157e-07
bbox:0.711538--0.923077--7639.39--0.0022885
cls:5.61354e-08--4.50119e-11
bbox:0.730769--0.923077--17385.6--0.00283923
cls:2.03106e-07--9.77846e-14
bbox:0.75--0.923077--35675--0.003352
cls:2.05048e-07--4.20465e-14
bbox:0.769231--0.923077--60815.3--0.0033422
cls:2.05052e-07--4.20465e-14
bbox:0.788462--0.923077--117217--0.00307179
cls:2.05052e-07--4.20465e-14
bbox:0.807692--0.923077--167417--0.00253383
cls:2.05052e-07--4.20465e-14
bbox:0.826923--0.923077--276560--0.0016467
cls:2.05052e-07--4.20465e-14
bbox:0.846154--0.923077--409196--0.00100956
cls:2.05052e-07--4.20465e-14
bbox:0.865385--0.923077--409405--0.000601612
cls:2.05052e-07--4.20465e-14
bbox:0.884615--0.923077--293693--0.000481688
cls:2.05052e-07--4.20465e-14
bbox:0.903846--0.923077--92303.4--0.000680861
cls:2.05052e-07--4.20465e-14
bbox:0.923077--0.923077--36983--0.00138854
cls:2.05052e-07--4.20465e-14
bbox:0.942308--0.923077--712.672--0.00725335
cls:2.05052e-07--4.20465e-14
bbox:0.961538--0.923077--31.127--0.0166295
cls:2.05052e-07--4.20465e-14
bbox:0.980769--0.923081--1.85301--0.0496818
cls:1.0852e-07--2.29967e-09
bbox:0.0192308--0.942308--0.23739--0.0449858
cls:4.55672e-08--2.12518e-08
bbox:0.038461--0.942308--1.24798--0.0602913
cls:9.65756e-09--5.2243e-08
bbox:0.0441648--0.942308--27.0746--0.0199436
cls:1.22537e-09--8.54542e-08
bbox:0.0576992--0.942308--839.719--0.00555218
cls:2.60102e-09--5.29671e-09
bbox:0.0769268--0.942308--2744.91--0.00289175
cls:1.23696e-07--1.30158e-11
bbox:0.0961594--0.942308--17416.5--0.00197398
cls:2.05026e-07--4.20465e-14
bbox:0.115385--0.942308--51120.5--0.00142003
cls:2.05052e-07--4.20465e-14
bbox:0.134615--0.942308--167904--0.000995084
cls:2.05052e-07--4.20465e-14
bbox:0.153846--0.942308--257190--0.000832829
cls:2.05052e-07--4.20465e-14
bbox:0.173077--0.942308--577587--0.000848024
cls:2.05052e-07--4.20465e-14
bbox:0.192308--0.942308--874012--0.000821955
cls:2.05052e-07--4.20465e-14
bbox:0.211538--0.942308--1.01137e+06--0.000904757
cls:2.05052e-07--4.20465e-14
bbox:0.230769--0.942308--995913--0.000895852
cls:2.05052e-07--4.20465e-14
bbox:0.25--0.942308--935340--0.000966216
cls:2.05052e-07--4.20465e-14
bbox:0.269231--0.942308--855353--0.00101212
cls:2.05052e-07--4.20465e-14
bbox:0.288462--0.942308--667888--0.00104062
cls:2.05052e-07--4.20465e-14
bbox:0.307692--0.942308--540058--0.000876283
cls:2.05052e-07--4.20465e-14
bbox:0.326923--0.942308--287921--0.000745657
cls:2.05052e-07--4.20465e-14
bbox:0.346154--0.942308--187103--0.00057384
cls:2.05052e-07--4.20465e-14
bbox:0.365385--0.942308--111709--0.000459447
cls:2.05052e-07--4.20465e-14
bbox:0.384615--0.942308--68537.1--0.00046311
cls:2.05052e-07--4.20465e-14
bbox:0.403846--0.942308--28160.4--0.00055768
cls:2.05052e-07--4.20465e-14
bbox:0.423077--0.942308--13263.1--0.00102969
cls:2.05052e-07--4.20465e-14
bbox:0.442308--0.942308--6686.38--0.00218977
cls:2.05052e-07--4.20465e-14
bbox:0.461538--0.942308--4102.48--0.00448606
cls:2.05052e-07--4.20465e-14
bbox:0.480769--0.942308--3064.76--0.00733868
cls:2.05028e-07--4.20465e-14
bbox:0.5--0.942308--2347.71--0.00427697
cls:1.98873e-07--4.20465e-14
bbox:0.519231--0.942308--11119.2--0.000573927
cls:1.45829e-09--2.90993e-11
bbox:0.538462--0.942308--9013.9--4.20146e-05
cls:5.745e-10--3.36227e-11
bbox:0.557692--0.942308--12398--5.10004e-06
cls:7.1592e-11--2.39425e-10
bbox:0.576923--0.942308--30033.4--1.28081e-06
cls:2.45911e-11--6.15646e-10
bbox:0.596154--0.942308--7561.16--4.68714e-06
cls:1.36877e-12--4.20382e-09
bbox:0.615385--0.942308--29801.8--7.60678e-06
cls:2.42828e-13--1.26419e-07
bbox:0.634615--0.942308--30578.8--9.43344e-05
cls:4.20465e-14--2.02372e-07
bbox:0.653846--0.942308--16299.3--0.000244336
cls:7.2284e-14--2.03175e-07
bbox:0.673077--0.942308--2957.58--0.000764852
cls:8.95615e-12--1.6398e-07
bbox:0.692308--0.942308--1378.28--0.00125186
cls:2.02707e-08--2.96338e-10
bbox:0.711539--0.942308--2770.29--0.0016186
cls:1.90063e-07--1.32676e-12
bbox:0.730769--0.942308--5116.82--0.00182974
cls:2.04782e-07--4.20465e-14
bbox:0.75--0.942308--9188.31--0.00202846
cls:2.05051e-07--4.20465e-14
bbox:0.769231--0.942308--12484.7--0.0020559
cls:2.05052e-07--4.20465e-14
bbox:0.788462--0.942308--22390.4--0.00183388
cls:2.05052e-07--4.20465e-14
@github-actions

github-actions bot commented May 12, 2020

Hello @zys1994, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Google Colab Notebook, Docker Image, and GCP Quickstart Guide for example environments.

If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

@zys1994
Author

zys1994 commented May 12, 2020

[uploaded screenshot: 2020-05-12-14-28-58]

@zys1994
Author

zys1994 commented May 12, 2020

// bptr[0] holds the 10674x4 box blob, bptr[1] the 10674x2 class blob (both FP32).
int pblen = static_cast<int>(bptr[0]->getTensorDesc().getDims()[0]);
const float *output_blob_b = bptr[0]->buffer().as<PrecisionTrait<Precision::FP32>::value_type *>();
const float *output_blob_c = bptr[1]->buffer().as<PrecisionTrait<Precision::FP32>::value_type *>();

for (int i = 0; i < pblen; ++i) {
    std::cout << "bbox:" << output_blob_b[4 * i] << "--" << output_blob_b[4 * i + 1] << "--"
              << output_blob_b[4 * i + 2] << "--" << output_blob_b[4 * i + 3] << std::endl;
    std::cout << "cls:" << output_blob_c[2 * i] << "--" << output_blob_c[2 * i + 1] << std::endl;
}

I can't understand the results this prints.

@zys1994
Author

zys1994 commented May 12, 2020

I have succeeded in using it in OpenVINO, thanks.

@zys1994 zys1994 closed this as completed May 12, 2020
@vandesa003

Hi @zys1994, I ran into the same issue. Could you share with us how to do the post-processing on the ONNX output?

@BackT0TheFuture

@zys1994
Same problem, would you like to share your code?
Thanks!

@xpngzhng

@goodtogood @vandesa003
Have you solved the problem?
Actually, it is quite simple. In ONNX export mode the output x, y, w, h are normalized, the same as the YOLO annotation format. You only need to rescale x and w by the image width, and y and h by the image height; see the sketch below.
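A minimal sketch of that rescaling in NumPy, assuming boxes is an (N, 4) array of normalized x, y, w, h rows (the function and array names are illustrative, not from the actual export code):

import numpy as np

def rescale_boxes(boxes: np.ndarray, img_w: int, img_h: int) -> np.ndarray:
    # Scale normalized [x, y, w, h] rows back to pixel coordinates.
    scaled = boxes.copy()
    scaled[:, 0] *= img_w  # x -> pixels
    scaled[:, 1] *= img_h  # y -> pixels
    scaled[:, 2] *= img_w  # w -> pixels
    scaled[:, 3] *= img_h  # h -> pixels
    return scaled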

@vandesa003

@goodtogood @vandesa003
Have you solved the problem?
Actually, it is quite simple. In ONNX export mode the output x, y, w, h are normalized, the same as the YOLO annotation format. You only need to rescale x and w by the image width, and y and h by the image height.

@xpngzhng thanks! I have solved the normalization problem and here I provided my code: #1172 (comment)

@sky-fly97

@goodtogood @vandesa003 Have you solved the problem? Actually, it is quite simple: in ONNX export mode the output x, y, w, h are normalized, the same as the YOLO annotation format. You only need to rescale x and w by the image width, and y and h by the image height.

@xpngzhng thanks! I have solved the normalization problem and here I provided my code: #1172 (comment)

Hello, I successfully exported the ONNX model using detect.py, but an error occurred when running inference with it. Can you share the code you use to call the ONNX model for prediction? Thanks!

@zjd1988

zjd1988 commented Jul 1, 2020

Hi @vandesa003, I have modified models.py as you said; in the boxes vector only x and y are the same, but w and h are different.

onnxruntime result:
[screenshot: 企业微信截图_15935974038073]

pytorch result:
[screenshot: 企业微信截图_15935974604716]

@xpngzhng

xpngzhng commented Jul 2, 2020

Here is part of my code using the ONNX model for inference, after further converting it to an OpenVINO model.

I did not modify ultralytics' code when converting the model to ONNX, except for the input image size; also, the opset should be 10, not 11.

At inference time we need to scale the OpenVINO box output x1, y1, x2, y2 by the image width and height. Note that my model has only one class, so the output is a little different from a model that has more than one class.

import math
import os
import sys
import time

import cv2
import numpy as np
from openvino.inference_engine import IECore

class InferContext(object):
    def __init__(self, model, weights, device_name):
        self.ie = IECore()
        self.net = self.ie.read_network(model=model, weights=weights)
        self.exec_net = self.ie.load_network(network=self.net, device_name=device_name)
        self.input_blob_name = next(iter(self.net.inputs))

    def infer(self, input):
        return self.exec_net.infer(inputs={self.input_blob_name: input})


class YoloV3DetContext(object):
    def __init__(self, model, weights, device_name, width, height, conf_thres=0.3, iou_thres=0.6):
        self.context = InferContext(model=model, weights=weights, device_name=device_name)
        self.width = width
        self.height = height
        self.conf_thres = conf_thres
        self.iou_thres = iou_thres

    @staticmethod
    def letterbox(img, new_shape=(416, 416), color=(127, 127, 127)):
        pass

    @staticmethod
    def xywh2xyxy(x):
        pass

    @staticmethod
    def compute_iou(rect, rest):
        pass

    @staticmethod
    def non_max_suppression(boxes, confs, conf_thres=0.3, iou_thres=0.6):
        pass

    @staticmethod
    def scale_coords(img1_shape, coords, img0_shape):
        pass

    @staticmethod
    def clip_coords(boxes, img_shape):
        pass

    def infer(self, image):
        img_reshape = YoloV3DetContext.letterbox(image, new_shape=(self.height, self.width))[0]
        img = img_reshape[:, :, ::-1].transpose(2, 0, 1)  # BGR to RGB, to 3x416x416
        img = np.ascontiguousarray(img)
        img = img.astype(dtype=np.float32)
        img /= 255.0  # 0 - 255 to 0.0 - 1.0
        img = np.expand_dims(img, axis=0)

        res = self.context.infer(img)

        confs = res['Concat_129']
        boxes = res['Concat_132']
        boxes[:, 0] *= self.width
        boxes[:, 1] *= self.height
        boxes[:, 2] *= self.width
        boxes[:, 3] *= self.height
        boxes, confs = self.non_max_suppression(boxes, confs, self.conf_thres, self.iou_thres)

        img1_shape = img_reshape.shape[:2]
        img0_shape = image.shape[:2]
        boxes = self.scale_coords(img1_shape, boxes, img0_shape)
        # print(boxes)

        return boxes.astype(dtype=np.int32), confs
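A hypothetical usage sketch, assuming the omitted helper bodies above are filled in (e.g. from ultralytics' utils); the .xml/.bin paths, image path, and 416x416 size are illustrative, and the 'Concat_129'/'Concat_132' output names above are specific to my exported model:

import cv2

# Load the converted OpenVINO model (IR .xml graph plus .bin weights).
det = YoloV3DetContext(model='yolov3.xml', weights='yolov3.bin',
                       device_name='CPU', width=416, height=416)
image = cv2.imread('test.jpg')   # BGR image at its original resolution
boxes, confs = det.infer(image)  # pixel-space x1, y1, x2, y2 plus scores
for x1, y1, x2, y2 in boxes:
    cv2.rectangle(image, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
cv2.imwrite('result.jpg', image)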

@dengfenglai321

boxes, confs = self.non_max_suppression(boxes, confs, self.conf_thres, self.iou_thres)

Hi, could you share your NMS code?

    @staticmethod
    def non_max_suppression(boxes, confs, conf_thres=0.3, iou_thres=0.6):
        pass

Thanks a lot!

@zjd1988

zjd1988 commented Sep 16, 2020

@cendelian you can refer to my Python implementation.
https://github.com/zjd1988/tensorrt_wrapper_for_onnx/blob/master/python_scripts/execution.py
class YoloNMSExecution
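For reference, a minimal class-agnostic NMS sketch in plain NumPy (an illustrative implementation, not the exact code from the repo above; boxes is assumed to be an (N, 4) array of pixel-space x1, y1, x2, y2 rows with one confidence per row):

import numpy as np

def non_max_suppression(boxes, confs, conf_thres=0.3, iou_thres=0.6):
    # Keep the highest-scoring boxes, suppressing overlaps above iou_thres.
    confs = confs.reshape(-1)
    mask = confs > conf_thres                 # drop low-confidence boxes first
    boxes, confs = boxes[mask], confs[mask]
    order = confs.argsort()[::-1]             # highest confidence first
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IoU of the current best box against the remaining candidates.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter + 1e-9)
        order = order[1:][iou <= iou_thres]   # keep boxes that overlap little
    keep = np.array(keep, dtype=np.int64)
    return boxes[keep], confs[keep]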

@glenn-jocher
Member

@zjd1988 hello everyone! It's great to see the community actively engaging and helping each other out with YOLOv3 and ONNX models. 😊

For those looking for NMS (Non-Maximum Suppression) code, it's a crucial step in object detection to ensure that you only get the best bounding box for each detected object. The NMS function typically takes the bounding boxes and their corresponding confidence scores, filters out boxes with a confidence below a threshold, and then selects the best bounding boxes while suppressing the non-maximal ones based on the IoU (Intersection over Union) threshold.

While I can't provide a direct code snippet here, I encourage you to check out the Ultralytics documentation for guidance on post-processing steps, including NMS. You can find detailed explanations and examples that should help you implement NMS correctly in your pipeline.

Keep up the great collaboration, and if you have further questions or run into issues, feel free to reach out! 🚀
