# YOLOv5 implementation using PyTorch

### Install
```bash
conda create -n YOLO python=3.8
conda activate YOLO
conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch-lts
pip install opencv-python==4.5.5.64
pip install PyYAML
pip install tqdm
```
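After installing, a quick sanity check (a minimal sketch; run inside the activated `YOLO` environment) confirms the pinned packages import and that PyTorch can see the GPU:

```python
import cv2
import torch

# Confirm that the installed packages import and CUDA is visible
print(f'PyTorch {torch.__version__}, OpenCV {cv2.__version__}')
print(f'CUDA available: {torch.cuda.is_available()}')
if torch.cuda.is_available():
    print(f'Device: {torch.cuda.get_device_name(0)}')
```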
### Train

- Configure your dataset path in `main.py` for training (a hypothetical example follows this list)
- Run `bash main.sh $ --train` for training, where `$` is the number of GPUs
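What "configure your dataset path" looks like depends on `main.py`; the snippet below is a hypothetical example only, where the variable name `data_dir` and the path are assumptions, not the repo's actual code:

```python
import os

# Hypothetical setting; match the actual variable used in main.py.
# It should point at the root of the COCO-style tree shown in the
# "Dataset structure" section below.
data_dir = os.path.join('..', 'Dataset', 'COCO')
```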
### Test

- Configure your dataset path in `main.py` for testing
- Run `python main.py --test` for testing (a checkpoint sanity check is sketched below)
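Before testing, it can help to sanity-check the weights file. This is a minimal sketch; the checkpoint path and its layout are assumptions, since the saved format depends on `main.py`:

```python
import torch

# Assumed path; point this at the trained or downloaded weights
ckpt = torch.load('weights/best.pt', map_location='cpu')

# Saved checkpoints are commonly a dict holding the model or a state_dict
if isinstance(ckpt, dict):
    print('checkpoint keys:', list(ckpt.keys()))
else:
    print('parameter count:', sum(p.numel() for p in ckpt.parameters()))
```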
### Results

| Version | Epochs | Box mAP | Download |
|:-------:|:------:|:-------:|:--------:|
| v5_n    | 600    | 28.0    | model    |
| v5_n*   | 300    | 27.6    | model    |
| v5_s*   | 300    | 37.1    | model    |
| v5_m*   | 300    | 44.7    | model    |
| v5_l*   | 300    | 48.4    | model    |
| v5_x*   | 300    | 50.0    | model    |
- `*` means that the weights are ported from the original repo, see reference
- To reproduce the results, run `bash main.sh 2 --train --epochs 600`; see `steps.csv` for the training log (a reading sketch follows)
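A quick way to inspect the training log; this sketch only assumes `steps.csv` is a plain comma-separated file with a header row, since the exact columns written during training are not documented here:

```python
import csv

# Print the logged columns and the last recorded row of steps.csv
with open('steps.csv') as f:
    rows = list(csv.DictReader(f))

print('columns:', list(rows[0].keys()))
print('last row:', rows[-1])
```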
### Dataset structure

```text
├── COCO
    ├── images
        ├── train2017
            ├── 1111.jpg
            ├── 2222.jpg
        ├── val2017
            ├── 1111.jpg
            ├── 2222.jpg
    ├── labels
        ├── train2017
            ├── 1111.txt
            ├── 2222.txt
        ├── val2017
            ├── 1111.txt
            ├── 2222.txt
```
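The `.txt` files under `labels` presumably follow the standard YOLO annotation format: one object per line as `class x_center y_center width height`, with coordinates normalized to [0, 1] relative to the image size. A small parsing sketch (the file path is illustrative):

```python
# Parse one YOLO-format label file: each line is
# "class x_center y_center width height", values normalized to [0, 1]
def load_labels(path):
    boxes = []
    with open(path) as f:
        for line in f:
            cls, x, y, w, h = line.split()
            boxes.append((int(cls), float(x), float(y), float(w), float(h)))
    return boxes

# Example: inspect one training annotation
for cls, x, y, w, h in load_labels('COCO/labels/train2017/1111.txt'):
    print(f'class={cls} center=({x:.3f}, {y:.3f}) size=({w:.3f}, {h:.3f})')
```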