Make sure you have the following packages installed:
pip install tqdm
pip install scikit-learn
pip install scipy==1.8.1
Generally speaking, evaluation can be done with the following command:
python eval.py -m model_name -d dataset_name -dr dataset_root_dir
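The exact flag set is defined inside eval.py; as a rough illustration, such a CLI is typically wired up with argparse. The following is a minimal sketch, not the script's actual source:

```python
# Minimal sketch of the kind of CLI eval.py exposes; the real script
# maintains its own model and dataset registries.
import argparse

parser = argparse.ArgumentParser(description="Evaluate a model on a dataset.")
parser.add_argument("-m", "--model", required=True, help="model name, e.g. mobilenet")
parser.add_argument("-d", "--dataset", required=True, help="dataset name, e.g. imagenet")
parser.add_argument("-dr", "--dataset_root", required=True, help="dataset root directory")
args = parser.parse_args()

print(f"Evaluating {args.model} on {args.dataset} at {args.dataset_root}")
```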
Supported datasets:
- ImageNet
- WIDERFace
- LFW
- ICDAR2003
- IIIT5K
- Mini Supervisely
Please visit https://image-net.org/ to download the ImageNet dataset (only the images in ILSVRC/Data/CLS-LOC/val are needed) and the labels from Caffe. Organize the files as follows:
$ tree -L 2 /path/to/imagenet
.
├── caffe_ilsvrc12
│   ├── det_synset_words.txt
│   ├── imagenet.bet.pickle
│   ├── imagenet_mean.binaryproto
│   ├── synsets.txt
│   ├── synset_words.txt
│   ├── test.txt
│   ├── train.txt
│   └── val.txt
├── caffe_ilsvrc12.tar.gz
├── ILSVRC
│   ├── Annotations
│   ├── Data
│   └── ImageSets
├── imagenet_object_localization_patched2019.tar.gz
├── LOC_sample_submission.csv
├── LOC_synset_mapping.txt
├── LOC_train_solution.csv
└── LOC_val_solution.csv
Run evaluation with the following command:
python eval.py -m mobilenet -d imagenet -dr /path/to/imagenet
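For reference, caffe_ilsvrc12/val.txt pairs each validation image filename with an integer class index, one entry per line; a minimal loading sketch under that assumption:

```python
# Sketch: parse caffe_ilsvrc12/val.txt, assuming the Caffe convention of
# "<image_filename> <class_index>" per line.
import os

def load_val_labels(imagenet_root):
    labels = {}
    with open(os.path.join(imagenet_root, "caffe_ilsvrc12", "val.txt")) as f:
        for line in f:
            name, idx = line.split()
            labels[name] = int(idx)
    return labels

labels = load_val_labels("/path/to/imagenet")
print(len(labels))  # 50000 validation images expected
```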
This script is adapted from WiderFace-Evaluation.
Please visit http://shuoyang1213.me/WIDERFACE to download the WIDERFace validation images, the face annotations and eval_tools. Organize the files as follows:
$ tree -L 2 /path/to/widerface
.
├── eval_tools
│   ├── boxoverlap.m
│   ├── evaluation.m
│   ├── ground_truth
│   ├── nms.m
│   ├── norm_score.m
│   ├── plot
│   ├── read_pred.m
│   └── wider_eval.m
├── wider_face_split
│   ├── readme.txt
│   ├── wider_face_test_filelist.txt
│   ├── wider_face_test.mat
│   ├── wider_face_train_bbx_gt.txt
│   ├── wider_face_train.mat
│   ├── wider_face_val_bbx_gt.txt
│   └── wider_face_val.mat
└── WIDER_val
    └── images
Run evaluation with the following command:
python eval.py -m yunet -d widerface -dr /path/to/widerface
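WIDERFace evaluation matches detections to ground truth by bounding-box overlap (boxoverlap.m in eval_tools). A rough Python equivalent of that IoU computation, shown for illustration rather than as an exact port of the MATLAB code:

```python
def box_iou(box, gt):
    # box, gt: [x1, y1, x2, y2]; returns intersection over union.
    ix = max(0.0, min(box[2], gt[2]) - max(box[0], gt[0]))
    iy = max(0.0, min(box[3], gt[3]) - max(box[1], gt[1]))
    inter = ix * iy
    union = ((box[2] - box[0]) * (box[3] - box[1])
             + (gt[2] - gt[0]) * (gt[3] - gt[1]) - inter)
    return inter / union if union > 0 else 0.0

print(box_iou([0, 0, 10, 10], [5, 5, 15, 15]))  # 25 / 175 ≈ 0.143
```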
This script is adapted from the evaluation code of InsightFace.
This evaluation uses YuNet as the face detector. The structure of the face bounding boxes saved in lfw_face_bboxes.npy is shown below. Each row holds the bounding box and landmarks of the main face to be used in each image.
[
[x, y, w, h, x_re, y_re, x_le, y_le, x_nt, y_nt, x_rcm, y_rcm, x_lcm, y_lcm],
...
[x, y, w, h, x_re, y_re, x_le, y_le, x_nt, y_nt, x_rcm, y_rcm, x_lcm, y_lcm]
]
x, y, w, h are the top-left coordinates, width and height of the face bounding box; {x, y}_{re, le, nt, rcm, lcm} stand for the coordinates of the right eye, left eye, nose tip, right corner of the mouth and left corner of the mouth respectively. The data type of this NumPy array is np.float32.
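A minimal sketch of loading this array and unpacking one row, following the field order described above:

```python
import numpy as np

bboxes = np.load("lfw_face_bboxes.npy")  # shape (num_images, 14), dtype float32
x, y, w, h = bboxes[0, :4]               # face bounding box of the first image
landmarks = bboxes[0, 4:].reshape(5, 2)  # right eye, left eye, nose tip,
                                         # right and left mouth corners
print(bboxes.dtype, x, y, w, h)
```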
Please visit http://vis-www.cs.umass.edu/lfw to download the LFW all-images archive (which needs to be decompressed) and pairs.txt (which needs to be placed in the view2 folder). Organize the files as follows:
$ tree -L 2 /path/to/lfw
.
├── lfw
│   ├── Aaron_Eckhart
│   ├── ...
│   └── Zydrunas_Ilgauskas
└── view2
    └── pairs.txt
Run evaluation with the following command:
python eval.py -m sface -d lfw -dr /path/to/lfw
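LFW verification compares the SFace embeddings of an image pair, typically by cosine similarity against a threshold; a hedged sketch follows (the 0.363 cutoff is the cosine threshold documented for OpenCV's SFace, and whether eval.py uses exactly this rule is an assumption):

```python
import numpy as np

def cosine_similarity(feat1, feat2):
    # feat1, feat2: 1-D face embedding vectors, e.g. 128-D SFace features.
    return float(np.dot(feat1, feat2)
                 / (np.linalg.norm(feat1) * np.linalg.norm(feat2)))

SAME_PERSON_THRESHOLD = 0.363  # assumed cutoff; eval.py may use its own

def is_same_person(feat1, feat2):
    return cosine_similarity(feat1, feat2) >= SAME_PERSON_THRESHOLD
```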
Please visit http://iapr-tc11.org/mediawiki/index.php/ICDAR_2003_Robust_Reading_Competitions to download the ICDAR2003 dataset and the labels.
$ tree -L 3 /path/to/icdar
.
├── word
│   ├── 1
│   │   ├── self
│   │   ├── ...
│   │   └── willcooks
│   ├── ...
│   └── 12
└── word.xml
Run evaluation with the following command:
python eval.py -m crnn -d icdar -dr /path/to/icdar
For example: download the zip file from http://www.iapr-tc11.org/dataset/ICDAR2003_RobustReading/TrialTrain/word.zip, unzip it to /path/to/icdar, then run:
python eval.py -m crnn -d icdar -dr /path/to/icdar
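word.xml lists each cropped word image together with its ground-truth text; a minimal parsing sketch, assuming the usual `<image file="..." tag="..."/>` layout of the ICDAR2003 word set:

```python
import xml.etree.ElementTree as ET

root = ET.parse("/path/to/icdar/word.xml").getroot()
samples = [(img.get("file"), img.get("tag")) for img in root.iter("image")]
print(len(samples), samples[0])  # e.g. ('word/1/self/....jpg', 'Self')
```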
Please visit https://github.com/cv-small-snails/Text-Recognition-Material to download the IIIT5K dataset and the labels.
Any dataset stored in LMDB format can be evaluated with this script.
Run evaluation with the following command:
python eval.py -m crnn -d iiit5k -dr /path/to/iiit5k
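Such LMDB text-recognition datasets conventionally store entries under image-%09d and label-%09d keys plus a num-samples counter; a reading sketch under that assumed convention:

```python
import lmdb

env = lmdb.open("/path/to/iiit5k", readonly=True, lock=False)
with env.begin() as txn:
    n = int(txn.get(b"num-samples"))          # assumed key convention
    label = txn.get(b"label-%09d" % 1).decode()
    image_bytes = txn.get(b"image-%09d" % 1)  # raw encoded image bytes
print(n, label, len(image_bytes))
```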
Please download the mini_supervisely data from here, which includes the validation dataset, and unzip it.
Run evaluation with the following command:
python eval.py -m pphumanseg -d mini_supervisely -dr /path/to/pphumanseg
Run evaluation on the quantized model with the following command:
python eval.py -m pphumanseg_q -d mini_supervisely -dr /path/to/pphumanseg
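Segmentation quality here is usually reported as mean intersection-over-union between the predicted and ground-truth masks; a small sketch of that metric (not necessarily the exact variant eval.py computes):

```python
import numpy as np

def mean_iou(pred, gt, num_classes=2):
    # pred, gt: integer label maps of identical shape.
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([[0, 1], [1, 1]])
gt = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, gt))  # (1/2 + 2/3) / 2 ≈ 0.583
```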