
The Lightweight Face Recognition Challenge & Workshop will be held in conjunction with the International Conference on Computer Vision (ICCV) 2019 in Seoul, Korea.

Please strictly follow the rules. For example, use the same method for the FLOPs calculation regardless of whether your training framework is insightface or not.
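
For reference, below is a minimal sketch of the common convention for counting convolution FLOPs (one multiply plus one add per multiply-accumulate). Whether a MAC counts as one or two operations is itself a convention, so make sure your count matches the official counter; the layer shapes in the example are hypothetical.

```python
# Hypothetical sketch of a standard Conv2d FLOPs count.
# The official insightface counter is authoritative for submissions.

def conv2d_flops(h_out, w_out, c_in, c_out, k_h, k_w, groups=1):
    """FLOPs for one Conv2d layer, counting a multiply-accumulate as 2 ops."""
    macs_per_output = (c_in // groups) * k_h * k_w
    num_outputs = h_out * w_out * c_out
    return 2 * num_outputs * macs_per_output

# Example: a 3x3 convolution, 64 -> 128 channels, on a 56x56 output map.
print(conv2d_flops(56, 56, 64, 128, 3, 3))  # 462422016, i.e. ~0.46 GFLOPs
```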

Test Server


The Lightweight Face Recognition Challenge has been supported by

EPSRC project FACER2VM (EP/N007743/1)

Huawei (5000$)

DeepGlint (3000$)

iQIYI (3000$)

Kingsoft Cloud (3000$)

Pensees (3000$)

Dynamic funding pool: 17000$

Cash sponsors and gift donations are welcome.


Discussion Group

For Chinese:


For English:

(in the #lfr2019 channel)


2019.06.21 We updated the ground truth of the Glint test dataset.

2019.06.04 We will clean the ground truth of the deepglint test set.

2019.05.21 Baseline models and training logs are available.

2019.05.16 The four tracks (deepglint-light, deepglint-large, iQIYI-light, iQIYI-large) will equally share the dynamic funding pool (14000$). Within each track, the top 3 players will receive 50%, 30% and 20% of that track's share, respectively.


How To Start:


  1. Download ms1m-retinaface from baiducloud or dropbox and unzip it to $INSIGHTFACE_ROOT/datasets/
  2. Go into $INSIGHTFACE_ROOT/recognition/
  3. Refer to the retina dataset configuration section in the provided sample configuration file and copy it as your own configuration file (a hypothetical sketch of such a section follows this list).
  4. Start training with CUDA_VISIBLE_DEVICES='0,1,2,3' python -u [training-script] --dataset retina --network [your-network] --loss arcface. By default, it reports the verification accuracy on lfw, cfp_fp and agedb_30 every 2000 batches.
  5. Putting the training dataset on an SSD will improve training efficiency.
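
The snippet below is only a hypothetical illustration of the kind of per-dataset configuration section step 3 refers to; the field names and values here are assumptions, so defer to the sample configuration file shipped in the repository.

```python
# Hypothetical illustration only -- the keys and values are assumptions,
# not the repository's actual configuration fields.
from easydict import EasyDict as edict

dataset = edict()
dataset.retina = edict()
dataset.retina.dataset_path = '../datasets/ms1m-retinaface'  # unzipped under $INSIGHTFACE_ROOT/datasets/
dataset.retina.num_classes = 93431                           # assumed identity count of ms1m-retinaface
dataset.retina.image_shape = (112, 112, 3)                   # aligned face crops
dataset.retina.val_targets = ['lfw', 'cfp_fp', 'agedb_30']   # verification sets reported during training
```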


  1. Download testdata-image from baiducloud or dropbox. These face images are all pre-processed and aligned.
  2. To download testdata-video from iQIYI, visit the iQIYI-VID challenge page; after registration, you need to download iQIYI-VID-FACE.z01, iQIYI-VID-FACE.z02 and the main iQIYI-VID-FACE.zip archive. These face frames are also pre-processed and aligned.
    1. Unzip the split archive: zip -s 0 iQIYI-VID-FACE.zip --out iQIYI_VID_FACE.zip; unzip iQIYI_VID_FACE.zip
    2. Decompression yields a directory named iQIYI_VID_FACE. Then move video_filelist.txt from the testdata-image package to iQIYI_VID_FACE/filelist.txt, to indicate the order of videos in the submission feature file.
  3. To generate the image feature submission file, check the corresponding generation script.
  4. To generate the video feature submission file, check the corresponding generation script.
  5. Submit the binary feature file to the correct track on the test server (a rough sketch of the feature packing follows this list).
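
As a rough sketch of what the feature-generation steps above produce: a submission is one binary file of float32 features, one row per image (or per video), in filelist order. The 512-D feature size and the flat row-major layout below are assumptions; the provided generation scripts define the actual format.

```python
# Rough sketch of packing embeddings into a binary submission file.
# The exact layout expected by the test server is defined by the provided
# generation scripts; the details below (512-D, flat float32) are assumptions.
import numpy as np

def write_submission(features, path):
    """features: (num_items, feat_dim) array, rows in test-filelist order."""
    feats = np.asarray(features, dtype=np.float32)
    # L2-normalize each row so cosine similarity reduces to a dot product.
    feats /= np.linalg.norm(feats, axis=1, keepdims=True)
    feats.tofile(path)  # flat float32 binary, row-major

# Example with random stand-in embeddings.
write_submission(np.random.randn(100, 512), 'image_submission.bin')
```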

You can also check the verification performance on the LFW, CFP-FP and AgeDB-30 datasets during training.


The final ranking is determined by the TAR under the 1:1 verification protocol only, for all valid submissions.

For the image test set, we evaluate TAR at FAR=1e-8, while for the video test set we use TAR at FAR=1e-4.
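
For clarity, here is a minimal sketch of how TAR at a fixed FAR can be computed from genuine (same-identity) and impostor (different-identity) similarity scores; the test server's own evaluation is authoritative.

```python
# Minimal sketch: TAR at a fixed FAR from 1:1 verification scores.
# The test server's evaluation is authoritative; this only illustrates the metric.
import numpy as np

def tar_at_far(genuine_scores, impostor_scores, far=1e-4):
    impostors = np.sort(impostor_scores)[::-1]     # descending
    k = int(far * len(impostors))                  # impostors allowed above threshold
    threshold = impostors[min(k, len(impostors) - 1)]
    return float(np.mean(genuine_scores > threshold))

# Toy example with synthetic, well-separated cosine similarities.
rng = np.random.default_rng(0)
genuine = rng.normal(0.7, 0.1, 10_000)
impostor = rng.normal(0.1, 0.1, 1_000_000)
print(tar_at_far(genuine, impostor, far=1e-4))     # close to 1.0
```

Note that measuring TAR at FAR=1e-8 reliably requires on the order of 10^8 impostor pairs, which is why the image test set operates at a much stricter point than any toy example can reproduce.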


  1. Network y2 (a deeper MobileFaceNet): 933M FLOPs. TAR_image: 0.64691, TAR_video: 0.47191
  2. Network r100fc (ResNet100FC-IR): 24G FLOPs. TAR_image: 0.80312, TAR_video: 0.64894

Baseline models download link: baidu cloud or dropbox

Training logs: baidu cloud or dropbox


Candidate solutions:

  1. Manually design or automatically search different networks/losses.
  2. Use slightly deeper or wider mobile-level networks.
  3. OctConv, to reduce FLOPs.
  4. HRNet, for the large-FLOPs tracks; and so on.