I started working on YOLOX inference using OpenCV's Deep Neural Network module (cv.dnn) and began testing various models from the YOLOX repo, including the legacy models.
For the project, we used the YOLOX_S, YOLOX_Tiny, and YOLOX_Nano versions of the YOLOX model, since they offer the following advantages over other models:
| Version    | Resolution | mAPval (0.5:0.95) | Params (M) | FLOPs (G) | Model Size   |
|------------|------------|-------------------|------------|-----------|--------------|
| YOLOX_S    | 640×640    | 40.5              | 9.0        | 26.8      | 66.7 MB FP16 |
| YOLOX_Tiny | 416×416    | 32.8              | 5.06       | 6.45      | 40.8 MB FP16 |
| YOLOX_Nano | 416×416    | 25.3              | 0.91       | 1.08      | 7.7 MB FP16  |
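Running these models through cv.dnn starts with loading the exported ONNX graph (e.g. via `cv2.dnn.readNetFromONNX`) and feeding it a letterboxed image. Below is a minimal NumPy-only sketch of YOLOX-style preprocessing — resize keeping aspect ratio, pad the remainder with 114, and skip mean/std normalization, which is the convention of recent YOLOX exports. The function name and the nearest-neighbour resize are my own illustration, not code from the repo:

```python
import numpy as np

def yolox_preprocess(img, input_size=(640, 640)):
    """Letterbox an HWC uint8 image to the network input size.

    Resize keeping aspect ratio, pad with the value 114, and return a
    CHW float32 blob plus the scale needed to map boxes back to the
    original image. Recent YOLOX exports take unnormalized pixels.
    """
    h, w = img.shape[:2]
    r = min(input_size[0] / h, input_size[1] / w)
    nh, nw = int(round(h * r)), int(round(w * r))
    # nearest-neighbour resize with plain numpy indexing
    ys = (np.arange(nh) / r).astype(np.int64).clip(0, h - 1)
    xs = (np.arange(nw) / r).astype(np.int64).clip(0, w - 1)
    resized = img[ys][:, xs]
    padded = np.full((input_size[0], input_size[1], 3), 114, dtype=np.uint8)
    padded[:nh, :nw] = resized
    # HWC -> NCHW float32, ready to hand to net.setInput(...)
    blob = padded.transpose(2, 0, 1)[None].astype(np.float32)
    return blob, r
```

In a cv.dnn pipeline, the returned blob goes to `net.setInput(blob)` and the scale `r` divides the decoded boxes to recover original-image coordinates.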
YOLOX is an anchor-free version of YOLO, with a simpler design but better performance! It aims to bridge the gap between research and industrial communities. We present some experienced improvements to the YOLO series, forming a new high-performance detector -- YOLOX. We switch the YOLO detector to an anchor-free manner and conduct other advanced detection techniques, i.e., a decoupled head and the leading label assignment strategy SimOTA, to achieve state-of-the-art results across a large scale range of models: for YOLOX-Nano with only 0.91M parameters and 1.08G FLOPs, we get 25.3% AP on COCO, surpassing NanoDet by 1.8% AP; for YOLOv3, one of the most widely used detectors in industry, we boost it to 47.3% AP on COCO, outperforming the current best practice by 3.0% AP; for YOLOX-L, with roughly the same amount of parameters as YOLOv4-CSP and YOLOv5-L, we achieve 50.0% AP on COCO at a speed of 68.9 FPS on a Tesla V100, exceeding YOLOv5-L by 1.8% AP.
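The anchor-free decoding described above can be sketched in NumPy. A YOLOX export for a 640×640 input emits 8400 rows (80×80 + 40×40 + 20×20 grid cells for strides 8/16/32), each holding a cell-relative center offset, log-scale width/height, an objectness score, and per-class scores. The helper below is an illustrative sketch of that decoding, not the repo's exact demo code:

```python
import numpy as np

def decode_yolox(pred, img_size=640, strides=(8, 16, 32)):
    """Decode raw YOLOX head output of shape (N, 5 + num_classes).

    Anchor-free decoding: each row predicts an offset from its grid
    cell plus log-scale width/height, both relative to the stride.
    """
    grids, expanded_strides = [], []
    for s in strides:
        hs = ws = img_size // s
        yv, xv = np.meshgrid(np.arange(hs), np.arange(ws), indexing="ij")
        grid = np.stack((xv, yv), axis=2).reshape(-1, 2)
        grids.append(grid)
        expanded_strides.append(np.full((grid.shape[0], 1), s))
    grids = np.concatenate(grids).astype(np.float32)
    expanded_strides = np.concatenate(expanded_strides).astype(np.float32)

    boxes = np.empty_like(pred[:, :4])
    boxes[:, :2] = (pred[:, :2] + grids) * expanded_strides   # center x, y
    boxes[:, 2:4] = np.exp(pred[:, 2:4]) * expanded_strides   # width, height
    scores = pred[:, 4:5] * pred[:, 5:]                       # obj * class
    return boxes, scores
```

The decoded `(cx, cy, w, h)` boxes are then converted to corners, filtered by score, and passed to NMS.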
WEEK 3 TASKS
Demonstrated cv.dnn inference of YOLOX on test images from the COCO val2017 dataset.
Ran a benchmark of each model on the COCO val2017 dataset and reported scores for Average Precision (AP) and Average Recall (AR); the observed results are shared below.
YOLOX_S:
Average forward time: 5.53 ms, Average NMS time: 1.71 ms, Average inference time: 7.25 ms
Average Precision (AP):

| area   | IoU       | maxDets | AP    |
|--------|-----------|---------|-------|
| all    | 0.50:0.95 | 100     | 0.405 |
| all    | 0.50      | 100     | 0.593 |
| all    | 0.75      | 100     | 0.437 |
| small  | 0.50:0.95 | 100     | 0.232 |
| medium | 0.50:0.95 | 100     | 0.448 |
| large  | 0.50:0.95 | 100     | 0.541 |

Average Recall (AR):

| area   | IoU       | maxDets | AR    |
|--------|-----------|---------|-------|
| all    | 0.50:0.95 | 1       | 0.326 |
| all    | 0.50:0.95 | 10      | 0.531 |
| all    | 0.50:0.95 | 100     | 0.574 |
| small  | 0.50:0.95 | 100     | 0.365 |
| medium | 0.50:0.95 | 100     | 0.634 |
| large  | 0.50:0.95 | 100     | 0.724 |
YOLOX_tiny:
Average forward time: 2.07 ms, Average NMS time: 1.71 ms, Average inference time: 3.79 ms
Average Precision (AP):

| area   | IoU       | maxDets | AP    |
|--------|-----------|---------|-------|
| all    | 0.50:0.95 | 100     | 0.328 |
| all    | 0.50      | 100     | 0.504 |
| all    | 0.75      | 100     | 0.346 |
| small  | 0.50:0.95 | 100     | 0.139 |
| medium | 0.50:0.95 | 100     | 0.360 |
| large  | 0.50:0.95 | 100     | 0.501 |

Average Recall (AR):

| area   | IoU       | maxDets | AR    |
|--------|-----------|---------|-------|
| all    | 0.50:0.95 | 1       | 0.283 |
| all    | 0.50:0.95 | 10      | 0.450 |
| all    | 0.50:0.95 | 100     | 0.485 |
| small  | 0.50:0.95 | 100     | 0.226 |
| medium | 0.50:0.95 | 100     | 0.550 |
| large  | 0.50:0.95 | 100     | 0.687 |
YOLOX_nano:
Average forward time: 1.68 ms, Average NMS time: 1.64 ms, Average inference time: 3.31 ms
Average Precision (AP):

| area   | IoU       | maxDets | AP    |
|--------|-----------|---------|-------|
| all    | 0.50:0.95 | 100     | 0.258 |
| all    | 0.50      | 100     | 0.414 |
| all    | 0.75      | 100     | 0.268 |
| small  | 0.50:0.95 | 100     | 0.082 |
| medium | 0.50:0.95 | 100     | 0.275 |
| large  | 0.50:0.95 | 100     | 0.410 |

Average Recall (AR):

| area   | IoU       | maxDets | AR    |
|--------|-----------|---------|-------|
| all    | 0.50:0.95 | 1       | 0.241 |
| all    | 0.50:0.95 | 10      | 0.384 |
| all    | 0.50:0.95 | 100     | 0.420 |
| small  | 0.50:0.95 | 100     | 0.157 |
| medium | 0.50:0.95 | 100     | 0.473 |
| large  | 0.50:0.95 | 100     | 0.631 |
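The per-image NMS stage timed in the benchmarks above is standard greedy IoU suppression. A self-contained NumPy sketch follows; the 0.45 IoU threshold is illustrative (it matches the default commonly used in YOLOX demos), not a value measured in these runs:

```python
import numpy as np

def nms(boxes, scores, iou_thr=0.45):
    """Greedy NMS on (x1, y1, x2, y2) boxes; returns kept indices."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]          # process highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IoU of the current box against all remaining candidates
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thr]   # drop heavily-overlapping boxes
    return keep
```

For multi-class detection this is usually applied per class, or class-agnostically after offsetting boxes by class index.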
Evaluated model precision metrics on the val2017 dataset and reported scores for individual class labels. Below are the per-class AP results on the COCO dataset for YOLOX_S, YOLOX_Tiny, and YOLOX_Nano, respectively.
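Per-class AP is obtained by sweeping that class's detections by confidence and integrating the resulting precision/recall curve. The sketch below uses all-point interpolation (Pascal VOC 2010+ style) for brevity; COCO's official evaluator instead averages precision over 101 fixed recall points and over IoU thresholds 0.50:0.95:

```python
import numpy as np

def average_precision(recall, precision):
    """All-point interpolated AP from sorted precision/recall arrays."""
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    # make precision monotonically decreasing from right to left
    for i in range(p.size - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # integrate the curve where recall actually changes
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))
```

Averaging this quantity over the 80 COCO classes (and, in the official protocol, over IoU thresholds) yields the mAP figures reported above.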