- PyTorch implementation of the paper "FaceNet: A Unified Embedding for Face Recognition and Clustering".
- The network is trained with triplet loss.
- This work modifies some functionality of the original work by Taebong Moon and was then retrained for the purpose of completing my BS degree. The full report can be found in this folder: full-and-paper-report
- To use the pretrained model, please refer to this repo: https://github.com/khrlimam/res-facenet
- If you wish to try the demo app, clone this repo and follow the installation instructions: https://github.com/khrlimam/demo-facenet
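Since training relies on triplet loss, here is a minimal sketch of that loss in plain Python (function names and the margin value are assumptions for illustration; the repo's actual implementation may differ, e.g. by using PyTorch's `torch.nn.TripletMarginLoss`):

```python
import math

def euclidean_dist(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss as in the FaceNet paper:
    L = max(0, d(a, p)^2 - d(a, n)^2 + margin),
    pulling the positive closer to the anchor than the negative.
    The margin of 0.2 is the paper's value, assumed here."""
    d_ap = euclidean_dist(anchor, positive) ** 2
    d_an = euclidean_dist(anchor, negative) ** 2
    return max(0.0, d_ap - d_an + margin)
```

The loss is zero whenever the negative is already farther from the anchor than the positive by at least the margin, so only "hard" triplets contribute gradients.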
- Download the VGGFace2 (for training) and LFW (for validation) datasets.
- Align the face image files by following David Sandberg's instructions (the "Face alignment" part).
- Write the list file of face images by running `datasets/write_csv_for_making_dataset.py`:

  ```
  python write_csv_for_making_dataset.py --root-dir=/path/to/dataset/dir --final-file=dataset.csv
  ```

  `datasets/write_csv_for_making_dataset.py` is a multiprocess version of `previous.ipynb`, so generating the CSV dataset is much faster.
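For intuition, a single-process sketch of what such a list-writing script might do is shown below: walk a root directory laid out as one subdirectory per identity and emit one CSV row per image. The column names, helper name, and layout are assumptions; the actual script's output format may differ (and it parallelizes this work across processes).

```python
import csv
import os

def write_dataset_csv(root_dir, final_file):
    """Write a CSV listing every image under root_dir/<person>/<image>.
    Columns (assumed): image id, person name, numeric class label."""
    people = sorted(
        d for d in os.listdir(root_dir)
        if os.path.isdir(os.path.join(root_dir, d))
    )
    with open(final_file, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "name", "class"])  # assumed header
        for class_idx, person in enumerate(people):
            person_dir = os.path.join(root_dir, person)
            for img in sorted(os.listdir(person_dir)):
                # Strip the extension to get an image id.
                writer.writerow([os.path.splitext(img)[0], person, class_idx])
```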
-
Train
- Again, you need to modify the paths to match the location of your image dataset.
- Feel free to change some parameters as well.
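One detail worth knowing before training: the FaceNet paper selects *semi-hard* negatives when forming triplets, i.e. negatives farther from the anchor than the positive but still within the margin. A hedged sketch of that selection rule (function name and fallback behavior are assumptions, not necessarily what this repo does):

```python
def pick_semi_hard_negative(d_ap, neg_dists, margin=0.2):
    """Return the index of a semi-hard negative:
    d_ap < d_an < d_ap + margin.
    Falls back to the closest negative if none qualifies."""
    candidates = [
        (d, i) for i, d in enumerate(neg_dists)
        if d_ap < d < d_ap + margin
    ]
    if candidates:
        return min(candidates)[1]
    # Fallback (assumed): hardest negative, i.e. the closest one.
    return min((d, i) for i, d in enumerate(neg_dists))[1]
```

Selecting semi-hard rather than the very hardest negatives helps avoid collapsed embeddings early in training, which is the rationale the paper gives for this choice.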
- Accuracy on VGGFace2 and LFW datasets
- Triplet loss on VGGFace2 and LFW datasets
- ROC curve on the LFW dataset for validation
- True counts at each threshold
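The ROC-curve and true-count plots both come from sweeping a distance threshold over validation pairs. A minimal sketch of that sweep (names and layout are assumptions; the repo's evaluation code may differ):

```python
def true_counts(dists, is_same, thresholds):
    """For each threshold t, count pairs classified correctly:
    a pair is predicted 'same person' when its embedding distance < t."""
    counts = []
    for t in thresholds:
        correct = sum((d < t) == same for d, same in zip(dists, is_same))
        counts.append(correct)
    return counts
```

Plotting `counts` against `thresholds` gives the true-count curve, and splitting the correct predictions into true positives and true negatives per threshold yields the points of the ROC curve.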