The Lightweight Face Recognition Challenge & Workshop will be held in conjunction with the International Conference on Computer Vision (ICCV) 2019 in Seoul, Korea.
Please strictly follow the rules. For example, use the same method for the FLOPs calculation regardless of whether your training framework is insightface or not.
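For reference, FLOPs in convolutional networks are dominated by multiply-adds. The sketch below counts multiply-accumulates (MACs) for a single convolution layer; it is only an illustration of the kind of counting involved, not the official challenge script, and the one-MAC-per-FLOP convention here is an assumption.

```python
def conv2d_flops(c_in, c_out, k, h_out, w_out, groups=1):
    """Multiply-add count for one k x k conv layer (one MAC counted as one FLOP)."""
    return (c_in // groups) * c_out * k * k * h_out * w_out

# A 3x3 convolution, 64 -> 64 channels, producing a 56x56 feature map:
print(conv2d_flops(64, 64, 3, 56, 56))      # 115605504
# The same layer as a depthwise convolution (groups == channels) is far cheaper:
print(conv2d_flops(64, 64, 3, 56, 56, groups=64))
```

Note that some tools report 2 FLOPs per MAC, which doubles every number; whichever convention the organisers fix, it must be applied uniformly across all entries.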
The Lightweight Face Recognition Challenge has been supported by
EPSRC project FACER2VM (EP/N007743/1)
Kingsoft Cloud ($3,000)
Dynamic funding pool ($17,000)
Cash sponsors and gift donations are welcome.
2019.06.21 We updated the groundtruth of the Glint test dataset.
2019.06.04 We will clean the groundtruth of the deepglint test set.
2019.05.21 Baseline models and training logs available.
2019.05.16 The four tracks (deepglint-light, deepglint-large, iQIYI-light, iQIYI-large) will equally share the dynamic funding pool ($14,000). In each track, the top 3 players will split that track's share at 50%, 30% and 20% respectively.
How To Start:
- Download ms1m-retinaface from baiducloud or dropbox and unzip it to
- Go into
- Refer to the `retina` dataset configuration section in `sample_config.py` and copy it as your own configuration file
- Start training with `CUDA_VISIBLE_DEVICES='0,1,2,3' python -u train.py --dataset retina --network [your-network] --loss arcface`. It will output the accuracy on lfw, cfp_fp and agedb_30 every 2000 batches by default.
- Putting the training dataset on an SSD will give better training efficiency.
- Download testdata-image from baiducloud or dropbox. These face images are all pre-processed and aligned.
- To download testdata-video from iQIYI, please visit http://challenge.ai.iqiyi.com/data-cluster. You need to download iQIYI-VID-FACE.z01, iQIYI-VID-FACE.z02 and iQIYI-VID-FACE.zip after registration. These face frames are also pre-processed and aligned.
`zip -s 0 iQIYI_VID_FACE.zip --out iQIYI_VID_FACE_ALL.zip; unzip iQIYI_VID_FACE_ALL.zip`
- We can get a directory named `iQIYI_VID_FACE` after decompression. Then we have to move `video_filelist.txt` from the testdata-image package to `iQIYI_VID_FACE/filelist.txt`, to indicate the order of videos in our submission feature file.
- To generate image feature submission file: check
- To generate video feature submission file: check
- Submit the binary feature file to the corresponding track on the test server.
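As an illustration of what a binary feature file could look like, the sketch below packs L2-normalised float32 vectors row by row. The byte layout, feature dimension and file name used here are assumptions for illustration only; the official feature-generation scripts define the actual submission format.

```python
import math
import struct

def l2_normalize(vec):
    """Scale a feature vector to unit length (ready for cosine similarity)."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

def write_features(path, feats):
    # Assumed layout: row-major little-endian float32, one vector per face/video.
    with open(path, "wb") as f:
        for feat in feats:
            f.write(struct.pack("<%df" % len(feat), *l2_normalize(feat)))

# Toy example with 2-D features; real submissions use high-dimensional vectors.
write_features("features.bin", [[3.0, 4.0], [1.0, 0.0]])
```

Keeping the row order identical to the provided file lists is essential, since the server matches features to identities purely by position.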
You can also check verification performance on the LFW, CFP_FP and AgeDB_30 datasets during training.
Final ranking is determined by the TAR under the 1:1 protocol only, for all valid submissions.
For the image test set, we evaluate TAR at FAR=1e-8, while for the video test set we evaluate TAR at FAR=1e-4.
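The ranking metric can be sketched as follows: pick the score threshold that keeps the false accept rate on impostor pairs at the target level, then measure the fraction of genuine pairs accepted at that threshold. This is a minimal illustration, not the evaluation server's code, and the strictly-greater threshold convention is an assumption.

```python
def tar_at_far(pos_scores, neg_scores, far):
    """TAR at a fixed FAR for 1:1 verification.

    pos_scores: similarity scores of genuine (same-identity) pairs.
    neg_scores: similarity scores of impostor (different-identity) pairs.
    """
    neg = sorted(neg_scores, reverse=True)
    # Threshold admitting at most far * len(neg) impostor pairs.
    k = int(far * len(neg))
    thresh = neg[k] if k < len(neg) else float("-inf")
    accepted = sum(1 for s in pos_scores if s > thresh)
    return accepted / len(pos_scores)

pos = [0.9, 0.8, 0.7, 0.4]       # genuine-pair similarities (toy data)
neg = [0.6, 0.5, 0.3, 0.2, 0.1]  # impostor-pair similarities (toy data)
print(tar_at_far(pos, neg, far=0.2))  # 0.75: three of four genuine pairs pass
```

At the extremely strict FAR=1e-8 operating point of the image track, a reliable estimate requires on the order of hundreds of millions of impostor pairs, which is why the test set is large.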
- Network y2 (a deeper MobileFaceNet): 933M FLOPs. TAR_image: 0.64691, TAR_video: 0.47191
- Network r100fc (ResNet100FC-IR): 24G FLOPs. TAR_image: 0.80312, TAR_video: 0.64894