
Official YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors #2557

Open

AlexeyAB opened this issue Jul 7, 2022 · 5 comments


Darknet cfg/weights file - currently tested for inference only:

Test FPS on: https://github.com/AlexeyAB/darknet

  • without NMS: darknet.exe detector demo cfg/coco.data yolov7-tiny.cfg yolov7-tiny.weights test.mp4 -benchmark

  • with NMS: darknet.exe detector demo cfg/coco.data yolov7-tiny.cfg yolov7-tiny.weights test.mp4 -dont_show


YOLOv7 is more accurate and faster than YOLOv5 (by 120% FPS), YOLOX (by 180% FPS), Dual-Swin-T (by 1200% FPS), ConvNeXt (by 550% FPS), SWIN-L (by 500% FPS), and PPYOLOE-X (by 150% FPS).

YOLOv7 surpasses all known object detectors in both speed and accuracy in the range from 5 FPS to 160 FPS, and it has the highest accuracy (56.8% AP) among all known real-time object detectors running at 30 FPS or higher on a V100 GPU with batch=1.

  • YOLOv7-e6 (55.9% AP, 56 FPS V100 b=1) is +500% FPS faster than SWIN-L C-M-RCNN (53.9% AP, 9.2 FPS A100 b=1)
  • YOLOv7-e6 (55.9% AP, 56 FPS V100 b=1) is +550% FPS faster than ConvNeXt-XL C-M-RCNN (55.2% AP, 8.6 FPS A100 b=1)
  • YOLOv7-w6 (54.6% AP, 84 FPS V100 b=1) is +120% FPS faster than YOLOv5-X6-r6.1 (55.0% AP, 38 FPS V100 b=1)
  • YOLOv7-w6 (54.6% AP, 84 FPS V100 b=1) is +1200% FPS faster than Dual-Swin-T C-M-RCNN (53.6% AP, 6.5 FPS V100 b=1)
  • YOLOv7x (52.9% AP, 114 FPS V100 b=1) is +150% FPS faster than PPYOLOE-X (51.9% AP, 45 FPS V100 b=1)
  • YOLOv7 (51.2% AP, 161 FPS V100 b=1) is +180% FPS faster than YOLOX-X (51.1% AP, 58 FPS V100 b=1)
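The quoted percentages follow from the FPS ratios; a quick sanity check (a sketch, using only the numbers from the bullets above; the quoted figures are rounded):

```python
# percent speedup = (fps_new / fps_old - 1) * 100
def speedup_pct(fps_new, fps_old):
    return (fps_new / fps_old - 1) * 100

print(round(speedup_pct(161, 58)))   # YOLOv7 vs YOLOX-X       -> 178, quoted as +180%
print(round(speedup_pct(84, 38)))    # YOLOv7-w6 vs YOLOv5-X6  -> 121, quoted as +120%
print(round(speedup_pct(114, 45)))   # YOLOv7x vs PPYOLOE-X    -> 153, quoted as +150%
```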



[image: yolov7_640_1280]

@AlexeyAB AlexeyAB pinned this issue Jul 7, 2022
@AlexeyAB AlexeyAB changed the title YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors Official YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors Jul 7, 2022
@ullrichthomas92

I tried to train on a custom dataset with yolov7-tiny and conv.87.
Unfortunately, the loss gets stuck above 100. The same configuration works great with yolov4-tiny.
Can you please help?

@ESJavadex

Are you going to release the full yolov7.weights and yolov7.cfg? Or is there any option to convert them from .pt?

@jhony2507

Good morning,
I'm using YOLOv7 to detect diseases in papaya, but the results are horrible. I have approximately 20k samples, divided across 8 diseases, and the annotations are correct, yet I still get a max mAP of 34% (after more than 20,000 iterations). This result is slightly WORSE than yolov4, which achieves a mAP of 37%.
Does anyone have an idea what could be wrong?
*** With EfficientDet-D3 I get a mAP of 65% on the same dataset (I don't have the computational capacity to use D6, D7...)

Attached are the yolov4/v7 configuration files and sample images
amostras
yolov4-custom_cfg.txt
yolov7-papaya_cfg.txt

@developer239

developer239 commented Sep 28, 2022

@AlexeyAB @ESJavadex @jhony2507 I would also like to know how to get the weights file for YOLOv7. Can I convert .pt to .weights somehow? (I guess not.)

How do I train to produce .weights then? How is it different from training v3/v4? 🙏

I guess we can convert .pt to .onnx, though. 🤔

@rishav1122

I have the yolov7-w6 weights file in darknet format (best.weights). Is there any way to convert it to .pt? Also, is there a yolov7-w6 config file available (yolov7-w6.cfg)?
