ArcFace Tensorflow 2

🔥 ArcFace (Additive Angular Margin Loss for Deep Face Recognition, published in CVPR 2019) implemented in TensorFlow 2.0+. This is an unofficial implementation. 🔥

Additive Angular Margin Loss (ArcFace) has a clear geometric interpretation thanks to its exact correspondence to geodesic distance on the hypersphere. It consistently outperforms the state of the art and can be implemented with negligible computational overhead.
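The margin logic itself is only a few lines. Below is a minimal sketch of the idea (not the repo's exact ArcHead code; see ./modules/models.py for that): embeddings and class weights are L2-normalized so their dot product equals cos(θ), the additive margin m is applied to the ground-truth angle only, and the result is rescaled before softmax cross-entropy.

import tensorflow as tf

def arcface_logits(embeddings, weights, labels, margin=0.5, scale=64.0):
    # Normalize both sides so the dot product equals cos(theta).
    x = tf.nn.l2_normalize(embeddings, axis=1)   # (batch, embd)
    w = tf.nn.l2_normalize(weights, axis=0)      # (embd, num_classes)
    cos_t = tf.matmul(x, w)
    theta = tf.acos(tf.clip_by_value(cos_t, -1.0 + 1e-7, 1.0 - 1e-7))
    one_hot = tf.one_hot(labels, depth=tf.shape(w)[1])
    # Add the angular margin m only to the ground-truth class.
    logits = tf.where(one_hot > 0.0, tf.cos(theta + margin), cos_t)
    return logits * scale  # feed into softmax cross-entropy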

Original Paper:   arXiv   CVPR2019

Official Implementation:   MXNet


Contents

Data Preparing

All datasets used in this repository can be found in face.evoLVe.PyTorch's Data-Zoo.

Note:

  • Both the training and testing datasets are the "Align_112x112" versions.

Training Dataset

Download the MS-Celeb-1M dataset, then extract it and convert it to tfrecord format as training data, as follows.

# Binary Image: conversion is slow, but loading is faster during training.
python data/convert_train_binary_tfrecord.py --dataset_path="/path/to/ms1m_align_112/imgs" --output_path="./data/ms1m_bin.tfrecord"

# Online Image Loading: conversion is fast, but loading is slower during training.
python data/convert_train_tfrecord.py --dataset_path="/path/to/ms1m_align_112/imgs" --output_path="./data/ms1m.tfrecord"
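Once converted, the tfrecord can be consumed with tf.data. A minimal parsing sketch is below; the feature keys ('image/encoded', 'image/source_id') are assumptions for illustration, not necessarily the repo's exact schema — check data/convert_train_binary_tfrecord.py for the real one.

import tensorflow as tf

def _parse(serialized):
    # Feature keys here are assumed, not the repo's exact schema.
    features = tf.io.parse_single_example(serialized, {
        'image/encoded': tf.io.FixedLenFeature([], tf.string),
        'image/source_id': tf.io.FixedLenFeature([], tf.int64),
    })
    img = tf.image.decode_jpeg(features['image/encoded'], channels=3)
    img = tf.cast(img, tf.float32) / 255.0  # scale pixels to [0, 1]
    return img, features['image/source_id']

dataset = tf.data.TFRecordDataset('./data/ms1m_bin.tfrecord').map(_parse)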

Note:

  • You can run python ./dataset_checker.py to check whether the dataloader works.

Testing Dataset

Download the LFW, AgeDB-30 and CFP-FP datasets, then extract them to /your/path/to/test_dataset. These testing data are already binary files, so no preprocessing is necessary. The directory structure should look like the one below.

/your/path/to/test_dataset/
    -> lfw_align_112/lfw
        -> data/
        -> meta/
        -> ...
    -> agedb_align_112/agedb_30
        -> ...
    -> cfp_align_112/cfp_fp
        -> ...

Training and Testing

You can modify the dataset paths and other model settings for training and testing in ./configs/*.yaml, which looks like the example below.

# general (shared both in training and testing)
batch_size: 128
input_size: 112
embd_shape: 512
sub_name: 'arc_res50'
backbone_type: 'ResNet50' # or 'MobileNetV2'
head_type: 'ArcHead' # or 'NormHead': FC to targets.
is_ccrop: False # central-cropping or not

# train
train_dataset: './data/ms1m_bin.tfrecord' # or './data/ms1m.tfrecord'
binary_img: True # False if dataset is online decoding
num_classes: 85742
num_samples: 5822653
epochs: 5
base_lr: 0.01
w_decay: !!float 5e-4
save_steps: 1000

# test
test_dataset: '/your/path/to/test_dataset'

Note:

  • The sub_name is the name of the outputs directory used in the checkpoints and logs folders. (Make sure it is unique among your models.)
  • The head_type chooses between the ArcFace head and a normal fully connected head for classification during training. (See ./modules/models.py for details.)
  • The is_ccrop flag controls whether central cropping is applied to both training and testing data.
  • The binary_img flag selects the type of training data, and should match the data type you created in Data Preparing.
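For reference, such a config can be loaded in a few lines (a sketch; the repo's own loader may differ):

import yaml

# A sketch of reading the config; the repo's own loader may differ.
with open('./configs/arc_res50.yaml', 'r') as f:
    cfg = yaml.safe_load(f)   # '!!float 5e-4' parses as a Python float

print(cfg['backbone_type'], cfg['num_classes'], cfg['w_decay'])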

Training

There are two modes for training your model; both should produce the same results in the end.

# training with tf.GradientTape(), great for debugging.
python train.py --mode="eager_tf" --cfg_path="./configs/arc_res50.yaml"

# training with model.fit().
python train.py --mode="fit" --cfg_path="./configs/arc_res50.yaml"
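The eager_tf mode boils down to a custom loop like the sketch below. The model, loss_fn, inputs and labels names stand in for the repo's own objects; see train.py for the real loop.

import tensorflow as tf

optimizer = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)

@tf.function
def train_step(model, loss_fn, inputs, labels):
    with tf.GradientTape() as tape:
        logits = model(inputs, training=True)
        loss = loss_fn(labels, logits)
    # Backpropagate and update weights manually instead of model.fit().
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss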

Testing

You can download my trained models for testing from Benchmark and Models instead of training them yourself.

python evaluate.py 
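Verification on these benchmarks amounts to embedding both faces of a pair and thresholding their distance. A sketch of that comparison is below; the threshold value is illustrative, and the standard protocol typically tunes it per test fold.

import numpy as np

def is_same_person(emb1, emb2, threshold=1.2):
    # Normalize to the unit hypersphere, then compare squared L2 distance.
    emb1 = emb1 / np.linalg.norm(emb1)
    emb2 = emb2 / np.linalg.norm(emb2)
    dist = np.sum(np.square(emb1 - emb2))
    return dist < threshold  # threshold value here is illustrative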

Build own face dataset

python take_pic.py -o path/to/dataset -n name

Run inference

python infer.py --update True

Benchmark and Models

Verification results (%) for different backbones, head types, data augmentation and loss functions.

Backbone     Head     Loss     CCrop   LFW     AgeDB-30   CFP-FP   Download Link
ResNet50     ArcFace  Softmax  False   99.35   95.03      90.36    GoogleDrive
MobileNetV2  ArcFace  Softmax  False   98.67   90.87      88.51    GoogleDrive
ResNet50     ArcFace  Softmax  True    99.28   94.82      93.14    GoogleDrive
MobileNetV2  ArcFace  Softmax  True    98.50   91.43      89.44    GoogleDrive

Note:

  • The 'CCrop' column above indicates central cropping on both training and testing data, which can remove the redundant boundary of the input face images (especially for AgeDB-30); a sketch follows this list.
  • All training settings of the models can be found in the corresponding ./configs/*.yaml files.
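A minimal sketch of such a central crop (the crop fraction here is illustrative; the repo's actual value may differ):

import tensorflow as tf

def central_crop(img, fraction=0.875, out_size=112):
    # Keep the central region, then resize back to the network input size.
    img = tf.image.central_crop(img, central_fraction=fraction)
    return tf.image.resize(img, (out_size, out_size))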


References

Thanks to these source codes for providing the knowledge needed to complete this repository.
