- Report
- How to reproduce results
- How to evaluate a model different from yolox
- How to setup Euler
- Euler commands
Report
TODO
How to reproduce results
Installation
Step 1: Install YOLOX-Bees.
git clone https://github.com/AlessandroRuzzi/YOLOX-Bees
cd YOLOX-Bees
pip3 install -U pip && pip3 install -r requirements.txt
pip3 install -v -e . # or python3 setup.py develop
Step 2: Install pycocotools.
pip3 install cython; pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
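If the installation went through, both packages should now be importable. A minimal, optional sanity check (the printed paths only confirm which installation is picked up):

```python
# Optional sanity check: confirm both packages import from the new environment.
import yolox
import pycocotools

print("yolox installed at:", yolox.__file__)
print("pycocotools installed at:", pycocotools.__file__)
```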
Training
Step 1: Download a yolox pre-trained checkpoint from the table below and put it in the folder YOLOX-Bees/checkpoints/.
Model | size | mAPval 0.5:0.95 | mAPtest 0.5:0.95 | Speed V100 (ms) | Params (M) | FLOPs (G) | weights |
---|---|---|---|---|---|---|---|
YOLOX-s | 640 | 40.5 | 40.5 | 9.8 | 9.0 | 26.8 | github |
YOLOX-m | 640 | 46.9 | 47.2 | 12.3 | 25.3 | 73.8 | github |
YOLOX-l | 640 | 49.7 | 50.1 | 14.5 | 54.2 | 155.6 | github |
YOLOX-x | 640 | 51.1 | 51.5 | 17.3 | 99.1 | 281.9 | github |
Step 2: Depending on the checkpoint you downloaded, choose the corresponding experiment file. The files are located in /YOLOX-Bees/exps/default/ and you can choose between yolox_s, yolox_m, yolox_l and yolox_x.
Step 3: Download from Azure the zip file /beelivingsensor/dslab2021/dslab2020_bee_detection_data_blurred/reproduce_results_dataset.zip, unzip it and put all the folders inside the folder YOLOX-Bees/datasets/.
Step 4: Run the following command to train YOLOX using a single GPU (training is only supported on GPUs):
python tools/train.py -f exps/default/YOUR_EXP_FILE.py -d 1 -b 4 --fp16 -o -c checkpoints/YOUR_CHECKPOINT.pth
If you are using the Euler cluster you can run:
bsub -W 24:00 -o log_test -R "rusage[mem=32000, ngpus_excl_p=1]" -R "select[gpu_model0==GeForceRTX2080Ti]" python tools/train.py -f exps/default/YOUR_EXP_FILE.py -d 1 -b 4 --fp16 -o -c checkpoints/YOUR_CHECKPOINT.pth
- -d: number of gpu devices
- -b: total batch size, the recommended number for -b is num-gpu * 8
- --fp16: mixed precision training
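For example, with the YOLOX-l experiment file on a single GPU (recommended batch size: num-gpu * 8 = 8), and assuming the downloaded checkpoint was saved as checkpoints/yolox_l.pth, the filled-in command would look like:
python tools/train.py -f exps/default/yolox_l.py -d 1 -b 8 --fp16 -o -c checkpoints/yolox_l.pth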
Step 5: Once training ends (around 80-100 epochs should be enough), you will find the best checkpoint (selected on the validation set) and the last-epoch checkpoint in the folder /YOLOX-Bees/YOLOX_outputs/YOUR_EXP_NAME/.
Evaluation
Step 1: Download a yolox checkpoint from Azure /beelivingsensor/dslab2021/dslab2020_bee_detection_data_blurred/best_ckpt_yolox_l.pth, or use a checkpoint that you produced, and put it in the folder YOLOX-Bees/checkpoints/.
Step 2: If you haven't already done it, download from Azure the zip file /beelivingsensor/dslab2021/dslab2020_bee_detection_data_blurred/reproduce_results_dataset.zip, unzip it and put all the folders inside the folder YOLOX-Bees/datasets/.
Step 3: Open the file YOLOX-Bees/exps/default/yolox_bees_eval.py and modify self.depth and self.width based on the checkpoint you have downloaded (yolox_x -> [1.33, 1.25], yolox_l -> [1.0, 1.0], yolox_m -> [0.67, 0.75], yolox_s -> [0.33, 0.50]).
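For orientation, the relevant part of a YOLOX experiment file looks roughly like the sketch below (field names follow the upstream YOLOX exp files; the actual yolox_bees_eval.py may contain additional settings):

```python
# Sketch of the depth/width settings in exps/default/yolox_bees_eval.py.
# Shown with the YOLOX-l values; swap in the pair matching your checkpoint.
import os
from yolox.exp import Exp as MyExp

class Exp(MyExp):
    def __init__(self):
        super(Exp, self).__init__()
        self.depth = 1.0   # yolox_x: 1.33, yolox_l: 1.0, yolox_m: 0.67, yolox_s: 0.33
        self.width = 1.0   # yolox_x: 1.25, yolox_l: 1.0, yolox_m: 0.75, yolox_s: 0.50
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
```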
Step 4: Run the following command to obtain predictions for all the datasets:
python evaluation.py image -f exps/default/yolox_bees_eval.py -c checkpoints/YOUR_CHECKPOINT.pth --tsize 832 --save_result --conf 0.05 --nms 0.8
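The flags mirror the usual YOLOX inference options (check evaluation.py for the exact definitions):
- --tsize: test image size used for inference
- --conf: confidence threshold for keeping predictions
- --nms: IoU threshold used for non-maximum suppression
- --save_result: save the images with the predicted bounding boxes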
Step 5: At the end you will find a file called mAP_results.txt together with an output file for each dataset in the folder YOLOX-Bees/map/output/, while the images with the bounding boxes predicted by the model are in the folder YOLOX-Bees/YOLOX_outputs/yolox_bees_eval/.
How to evaluate a model different from yolox
Create the detection-results files
Step 1: Use your model to create a separate detection-results text file for each image of each dataset.
Step 2: Use matching names for the files (e.g. image: "image_1.jpg", detection-results: "image_1.txt").
Step 3: In these files, each line should be in the following format:
<class_name> <confidence> <left> <top> <right> <bottom>
Step 4: E.g. "image_1.txt":
tvmonitor 0.471781 0 13 174 244
cup 0.414941 274 226 301 265
book 0.460851 429 219 528 247
chair 0.292345 0 199 88 436
book 0.269833 433 260 506 336
Step 5: Put all the files in the folder YOLOX-Bees/map/input/DATASET_NAME/detection-results, where DATASET_NAME can be for example Chueried_Hive01. To know all the dataset names you can refer to lines 30 - 41 of the file evaluation.py.
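A minimal sketch of how such files could be generated is shown below; write_detection_file and the example detections are hypothetical placeholders, so adapt them to whatever your model outputs.

```python
# Minimal sketch: write one detection-results file per image in the
# "<class_name> <confidence> <left> <top> <right> <bottom>" format.
import os

def write_detection_file(out_dir, image_name, detections):
    """detections: list of (class_name, confidence, left, top, right, bottom)."""
    os.makedirs(out_dir, exist_ok=True)
    txt_name = os.path.splitext(image_name)[0] + ".txt"
    with open(os.path.join(out_dir, txt_name), "w") as f:
        for class_name, conf, left, top, right, bottom in detections:
            f.write(f"{class_name} {conf:.6f} {int(left)} {int(top)} {int(right)} {int(bottom)}\n")

# Example: one image from the Chueried_Hive01 dataset
write_detection_file(
    "map/input/Chueried_Hive01/detection-results",
    "image_1.jpg",
    [("bee", 0.87, 120, 45, 160, 90)],
)
```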
At the end the folder YOLOX-Bees/map/input/ should have the following structure:
input
|——————Chueried_Hive01
| └——————detection-results
|
|——————ClemensRed
| └——————detection-results
|
|——————Doettingen_Hive1
| └——————detection-results
|
|——————Echolinde
| └——————detection-results
|
|——————Erlen_diago
| └——————detection-results
|
|——————Erlen_front
| └——————detection-results
|
|——————Erlen_Hive11
| └——————detection-results
|
|——————Erlen_smart
| └——————detection-results
|
|——————Froh14
| └——————detection-results
|
|——————Froh23
| └——————detection-results
|
|——————Hempbox
| └——————detection-results
|
|——————UnitedQueens
| └——————detection-results
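Before running the evaluation, you can optionally verify this structure with a small script run from the YOLOX-Bees root (the dataset names are the ones listed above):

```python
# Optional check: every dataset folder should contain a detection-results directory.
import os

datasets = [
    "Chueried_Hive01", "ClemensRed", "Doettingen_Hive1", "Echolinde",
    "Erlen_diago", "Erlen_front", "Erlen_Hive11", "Erlen_smart",
    "Froh14", "Froh23", "Hempbox", "UnitedQueens",
]

for name in datasets:
    path = os.path.join("map", "input", name, "detection-results")
    print(path, "->", "ok" if os.path.isdir(path) else "MISSING")
```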
Evaluation
Step 1: Download from Azure the zip file /beelivingsensor/dslab2021/dslab2020_bee_detection_data_blurred/reproduce_results_dataset.zip, unzip it and put all the folders inside the folder YOLOX-Bees/datasets/ (we need them to create the ground-truth labels).
Step 2: Run the following command to evaluate the predictions on all the datasets:
python evaluation_no_yolox.py image -f exps/default/yolox_bees_eval.py --tsize 832
Step 3: At the end you will find a file called mAP_results.txt together with an output file for each dataset in the folder YOLOX-Bees/map/output/.
How to setup Euler
Step 1: To log in, open the terminal and run the following command (you must be connected to the ETH VPN):
ssh ETH_USERNAME@euler.ethz.ch
Step 2: Run the following commands to load the required modules:
env2lmod
module load eth_proxy gcc/6.3.0 python_gpu/3.8.5
Step 3: To move files from your PC to Euler or vice versa you can use the scp command.
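For example, to copy a checkpoint from your PC to the repository on Euler (the destination path is only an illustration; adjust the username and target folder to your setup):
scp checkpoints/YOUR_CHECKPOINT.pth ETH_USERNAME@euler.ethz.ch:~/YOLOX-Bees/checkpoints/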
Step 4: Go to the section How to reproduce results and follow the instructions.
Euler commands
1. bpeek -> Use bpeek to check the output of the job you are interested in:
bpeek jobID
2. bkill -> Use bkill to kill a specific job:
bkill jobID
3. bjobs -> Use bjobs to check all your active/pending jobs:
bjobs
4. vim log_test -> Use this command to check the log/errors of a completed/interrupted job:
vim log_test
Deploy
- MegEngine in C++ and Python
- ONNX export and an ONNXRuntime
- TensorRT in C++ and Python
- ncnn in C++ and Java
- OpenVINO in C++ and Python
Third-party resources
- The ncnn android app with video support: ncnn-android-yolox from FeiGeChuanShu
- YOLOX with Tengine support: Tengine from BUG1989
- YOLOX + ROS2 Foxy: YOLOX-ROS from Ar-Ray
- YOLOX Deploy DeepStream: YOLOX-deepstream from nanmi
- YOLOX MNN/TNN/ONNXRuntime: YOLOX-MNN, YOLOX-TNN and YOLOX-ONNXRuntime C++ from DefTruth
- Converting darknet or yolov5 datasets to COCO format for YOLOX: YOLO2COCO from Daniel
If you use YOLOX in your research, please cite our work by using the following BibTeX entry:
@article{yolox2021,
  title={YOLOX: Exceeding YOLO Series in 2021},
  author={Ge, Zheng and Liu, Songtao and Wang, Feng and Li, Zeming and Sun, Jian},
  journal={arXiv preprint arXiv:2107.08430},
  year={2021}
}