PPYOLOE

Abstract

PP-YOLOE is an excellent single-stage, anchor-free model based on PP-YOLOv2 that surpasses a variety of popular YOLO models. PP-YOLOE comes in a series of sizes, named s/m/l/x, configured through a width multiplier and a depth multiplier. PP-YOLOE avoids special operators such as Deformable Convolution and Matrix NMS so that it can be deployed easily on a wide range of hardware.

PPYOLOE-PLUS-l model structure
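
The s/m/l/x variants share a single architecture and differ only in the two multipliers mentioned above. The sketch below illustrates the idea; the multiplier values and helper functions are illustrative assumptions, not taken from this page or from the official configs.

```python
# A minimal sketch (not the official config) of how the s/m/l/x variants are
# derived from one architecture by scaling depth and width.
# The multiplier values below are assumptions for illustration only.
variant_multipliers = {
    's': dict(deepen_factor=0.33, widen_factor=0.50),
    'm': dict(deepen_factor=0.67, widen_factor=0.75),
    'l': dict(deepen_factor=1.00, widen_factor=1.00),
    'x': dict(deepen_factor=1.33, widen_factor=1.25),
}

def scale_channels(base_channels: int, widen_factor: float) -> int:
    """Scale a layer's channel count by the width multiplier."""
    return max(int(round(base_channels * widen_factor)), 1)

def scale_blocks(base_blocks: int, deepen_factor: float) -> int:
    """Scale the number of repeated blocks by the depth multiplier."""
    return max(int(round(base_blocks * deepen_factor)), 1)

# Example: the 's' variant halves the channel widths and uses roughly a third
# of the repeated blocks compared with the 'l' baseline.
m = variant_multipliers['s']
print(scale_channels(256, m['widen_factor']))  # -> 128
print(scale_blocks(3, m['deepen_factor']))     # -> 1
```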

Results and models

PPYOLOE+ COCO

| Backbone    | Arch | Size | Epoch | SyncBN | Mem (GB) | Box AP | Config | Download     |
| :---------- | :--: | :--: | :---: | :----: | :------: | :----: | :----: | :----------- |
| PPYOLOE+ -s | P5   | 640  | 80    | Yes    | 4.7      | 43.5   | config | model \| log |
| PPYOLOE+ -m | P5   | 640  | 80    | Yes    | 8.4      | 49.5   | config | model \| log |
| PPYOLOE+ -l | P5   | 640  | 80    | Yes    | 13.2     | 52.6   | config | model \| log |
| PPYOLOE+ -x | P5   | 640  | 80    | Yes    | 19.1     | 54.2   | config | model \| log |

Note:

  1. The Box AP values above are each model's best result on COCO.
  2. The gap between the results above and the official release is about 0.3 Box AP. To speed up training in mmyolo, the image resizing in PPYOLOEBatchRandomResize for multi-scale training is implemented with PyTorch, while the official PPYOLOE uses OpenCV; in addition, lanczos4 interpolation is not yet supported in PPYOLOEBatchRandomResize. These two differences account for the gap (see the sketch after these notes). We will continue to experiment and close the gap in future releases.
  3. The mAP of the non-Plus version still needs further verification; we will add more details about it in future releases.
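
As a rough illustration of the difference described in note 2, the sketch below resizes an image along the two paths: batched tensor resizing with PyTorch (a stand-in for what mmyolo does) and single-image resizing with OpenCV's lanczos4 interpolation (as in the official implementation). The function names and the bilinear mode are assumptions for illustration, not mmyolo's actual API.

```python
# Minimal sketch of the two resizing paths mentioned in note 2.
# Names and interpolation choices here are illustrative assumptions.
import cv2
import numpy as np
import torch
import torch.nn.functional as F

def resize_with_torch(batch: torch.Tensor, size: int) -> torch.Tensor:
    """Resize an (N, C, H, W) batch with PyTorch bilinear interpolation."""
    return F.interpolate(batch, size=(size, size), mode='bilinear',
                         align_corners=False)

def resize_with_opencv(image: np.ndarray, size: int) -> np.ndarray:
    """Resize a single (H, W, C) image with OpenCV's lanczos4 interpolation."""
    return cv2.resize(image, (size, size), interpolation=cv2.INTER_LANCZOS4)

# The per-pixel results of the two paths differ slightly, which is one source
# of the ~0.3 Box AP gap described in note 2.
img = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
out_cv = resize_with_opencv(img, 512)
out_pt = resize_with_torch(
    torch.from_numpy(img).permute(2, 0, 1)[None].float(), 512)
```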
Citation

@article{Xu2022PPYOLOEAE,
  title={PP-YOLOE: An evolved version of YOLO},
  author={Shangliang Xu and Xinxin Wang and Wenyu Lv and Qinyao Chang and Cheng Cui and Kaipeng Deng and Guanzhong Wang and Qingqing Dang and Shengyun Wei and Yuning Du and Baohua Lai},
  journal={ArXiv},
  year={2022},
  volume={abs/2203.16250}
}