
FAQ (Continuously updated)

Chilicyy edited this page Sep 6, 2022 · 9 revisions

General Questions

Question 1. Is there an official team constantly working on the maintenance of YOLOv6?

  • Yes, a team from the Meituan Vision AI Department started YOLOv6 and will continue to build it up. Additionally, we welcome all developers to join us as contributors to YOLOv6.

Question 2. Will there be an official paper for YOLOv6?

  • An official technical report of YOLOv6 is coming soon.

Question 3. How to deploy YOLOv6?

  • Please see the About Deployment section below.

About Training

Question 1. How can I check whether my training procedure is fine?

  • Please check your training procedure with the following tips:

    • Use TensorBoard to check whether the loss is decreasing;
    • Use TensorBoard to check whether the input train_batch is correctly annotated, and whether predictions on the validation dataset are reasonable;
    • Check whether mAP rises during training.
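As a quick sanity check alongside TensorBoard, you can also verify programmatically that logged loss values trend downward. The helper below is an illustrative sketch, not part of YOLOv6; the function name and windowing scheme are assumptions:

```python
import random

def loss_is_decreasing(losses, window=10):
    """Rough sanity check: compare the mean loss over the first and last
    `window` iterations. A healthy run should show a clear drop."""
    if len(losses) < 2 * window:
        raise ValueError("need at least 2 * window loss values")
    head = sum(losses[:window]) / window
    tail = sum(losses[-window:]) / window
    return tail < head

# Example: a noisy but decaying loss curve passes the check.
random.seed(0)
curve = [5.0 * (0.97 ** i) + random.uniform(-0.05, 0.05) for i in range(200)]
print(loss_is_decreasing(curve))  # True
```

A flat or rising curve over many iterations is a signal to re-check the learning rate, data annotations, and config before training longer.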

Question 2. How to reproduce results on the COCO dataset?

  • Please reproduce results on the COCO dataset by following the training commands from the README, for example:
python -m torch.distributed.launch --nproc_per_node 8 tools/train.py --batch 256 --conf configs/yolov6s.py --data data/coco.yaml --device 0,1,2,3,4,5,6,7

(2022/09/06 update!!) You can now refer to the training commands in README.md under "Reproduce our results on COCO".
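For a quick single-GPU sanity run before launching full distributed training, the same entry point can be invoked without the distributed wrapper. All flags below come from the command above; the smaller batch size is an arbitrary choice to fit one GPU:

```shell
python tools/train.py --batch 32 --conf configs/yolov6s.py --data data/coco.yaml --device 0
```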

Question 3. What's the training time of YOLOv6 under different configurations?

  • Training YOLOv6n on 4 A100 GPUs with batch size 128 for 300 epochs takes about one day.

  • Training YOLOv6s on 8 A100 GPUs with batch size 256 for 300 epochs takes about one day.

About Testing

Question 1. Is there any plan to support PyTorch Hub and integrate demos on Google Colab?

About Deployment

Question 1. What frameworks are currently supported for inference and deployment?

  • Please check the tutorials on ONNX, TensorRT and OpenCV; you can deploy your model with any of these frameworks. In addition, you can convert models from ONNX to other frameworks you'd like to use.
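Whichever runtime you choose, the input image still has to be preprocessed the same way as during training. The sketch below shows a generic letterbox resize in pure NumPy; the function name, the 640x640 target, and the padding value 114 follow common YOLO practice and are assumptions here, not the exact YOLOv6 code (which resizes with OpenCV rather than nearest-neighbour indexing):

```python
import numpy as np

def letterbox(img, new_shape=(640, 640), pad_value=114):
    """Resize an HWC uint8 image keeping aspect ratio, then pad to new_shape."""
    h, w = img.shape[:2]
    r = min(new_shape[0] / h, new_shape[1] / w)
    nh, nw = round(h * r), round(w * r)
    # Nearest-neighbour resize via index selection (real code would use cv2.resize).
    rows = np.round(np.linspace(0, h - 1, nh)).astype(int)
    cols = np.round(np.linspace(0, w - 1, nw)).astype(int)
    resized = img[rows][:, cols]
    canvas = np.full((new_shape[0], new_shape[1], 3), pad_value, dtype=np.uint8)
    top = (new_shape[0] - nh) // 2
    left = (new_shape[1] - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas, r, (left, top)

# HWC uint8 image -> NCHW float32 tensor in [0, 1], ready for the runtime.
img = np.zeros((480, 640, 3), dtype=np.uint8)
boxed, ratio, (dx, dy) = letterbox(img)
tensor = boxed.transpose(2, 0, 1)[None].astype(np.float32) / 255.0
print(tensor.shape)  # (1, 3, 640, 640)
```

The returned ratio and offsets are needed afterwards to map predicted boxes back to the original image coordinates.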

Question 2. How to solve the accuracy drop caused by INT8 quantization?

  • Models containing RepVGG blocks suffer a large accuracy drop with INT8 quantization. However, training the model with RepOpt alleviates this problem. More details can be found here.
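The quantization sensitivity comes from RepVGG-style re-parameterization: at deploy time the 3x3, 1x1, and identity branches are folded into a single 3x3 kernel, and the merged weights can have a dynamic range that quantizes poorly. A minimal single-channel NumPy sketch of the folding itself (batch-norm folding and multi-channel handling omitted for brevity; this is an illustration, not the YOLOv6 implementation):

```python
import numpy as np

def conv3x3_same(x, k):
    """Single-channel 2D cross-correlation, stride 1, zero 'same' padding."""
    xp = np.pad(x, 1)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + 3, j:j + 3] * k)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 5))
k3 = rng.standard_normal((3, 3))   # 3x3 branch
k1 = rng.standard_normal()         # 1x1 branch (a scalar for one channel)

# Training-time output: sum of the 3x3, 1x1, and identity branches.
y_branches = conv3x3_same(x, k3) + k1 * x + x

# Deploy-time: fold the 1x1 and identity branches into the centre tap
# of a single 3x3 kernel, then run one convolution.
k_merged = k3.copy()
k_merged[1, 1] += k1 + 1.0
y_merged = conv3x3_same(x, k_merged)

print(np.allclose(y_branches, y_merged))  # True
```

The two outputs match in floating point, but INT8 has to represent the merged kernel with one shared scale, which is where the accuracy loss creeps in; RepOpt sidesteps this by moving the re-parameterization into the optimizer.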