
Recovery-based Occluded Face Recognition by Identity-guided Inpainting — Official PyTorch Implementation

[Figure: example results]

Official implementation of "Recovery-based Occluded Face Recognition by Identity-guided Inpainting" with PyTorch Lightning. In the paper, we propose ID-Inpainter, an identity-guided face inpainting model that preserves identity information as much as possible through a more accurate identity sampling strategy and a GAN-like fusing network.

Datasets

Preparing Data

You need to download and unzip the following datasets: FFHQ, CelebA, VGGFace, and LFW.

Data preprocessing

The preprocessing code is mainly based on RetinaFace. Preprocess FFHQ, CelebA, and VGGFace by running:

python preprocess112.py --root ./data/CelebA/raw/ --output_dir ./data/CelebA/celeba112/
python preprocess112.py --root ./data/FFHQ/raw/ --output_dir ./data/FFHQ/ffhq112/
python preprocess112.py --root ./data/VGGFace/raw/ --output_dir ./data/VGGFace/vgg112/

Process LFW by running:

python preprocess112.py --root ./data/LFWs/raw/ --output_dir ./data/LFWs/lfw_112/ --has_subdirs

You may add multiprocessing to our preprocessing script to finish this step much faster.
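
One way to do that is to run the per-image alignment in a worker pool. The sketch below is illustrative only: process_image is a hypothetical stand-in for whatever per-image routine preprocess112.py actually uses (RetinaFace detection, alignment, and saving the 112x112 crop).

# Illustrative only: parallel per-image preprocessing with a multiprocessing pool.
# process_image is a hypothetical placeholder for the per-image alignment routine
# in preprocess112.py (detect with RetinaFace, align, save the 112x112 crop).
import glob
import os
from multiprocessing import Pool

def process_image(path):
    # call the actual RetinaFace-based alignment here and write the result
    pass

if __name__ == "__main__":
    paths = glob.glob(os.path.join("./data/CelebA/raw/", "*.jpg"))
    with Pool(processes=8) as pool:
        pool.map(process_image, paths)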

Training

Configuration

There is a YAML file in the config folder. It must be edited to match your training setup (dataset paths, metadata, etc.).

  • config/train.yaml: Configs for training Fusor-Net.
    • Fill in the blanks: dataset_dir, valset_dir (a sanity-check sketch follows this list).
    • You may want to change batch_size if you are not training on a 32 GB V100, or chkpt_dir to save checkpoints on a different disk.
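
As a quick sanity check before launching training, you can load train.yaml and confirm the required paths are filled in. The field names below (dataset_dir, valset_dir, batch_size, chkpt_dir) come from this section; the flat layout and the use of PyYAML are assumptions, so adapt the keys to the actual structure of the file.

# Assumed-layout sanity check for config/train.yaml; adjust keys if they are nested.
import yaml

with open("config/train.yaml") as f:
    cfg = yaml.safe_load(f)

for key in ("dataset_dir", "valset_dir"):
    if not cfg.get(key):
        raise ValueError(f"Please fill in '{key}' in config/train.yaml")

print("batch_size:", cfg.get("batch_size"))
print("checkpoint dir:", cfg.get("chkpt_dir"))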

Training Fusor_Net:

# Step 1: set id_weight = 5 in loss.py, set dataset_dir and valset_dir in train.yaml, then run:
python fusor_trainer.py -n ffhq -e 3

# Step 2: set id_weight = 7 in loss.py, update dataset_dir and valset_dir in train.yaml, then run:
python fusor_trainer.py -n celeba -e 6 -p './chkpt/ffhq/ffhq_last_2.ckpt'

# Step 3: set id_weight = 9 in loss.py, update dataset_dir and valset_dir in train.yaml, then run:
python fusor_trainer.py -n vgg -e 10 -p './chkpt/celeba/celeba_last_5.ckpt'
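
For orientation, increasing id_weight across the three stages (5, then 7, then 9) puts progressively more emphasis on identity preservation. The sketch below shows one way such an identity-weighted term can enter a total loss; it is an illustration under assumed loss terms, not the actual contents of loss.py.

# Illustrative only: how an id_weight-scaled identity term might be combined.
# The real loss.py in this repo may use different terms and names.
import torch.nn.functional as F

def total_loss(output, target, id_emb_out, id_emb_target, id_weight=5.0):
    recon = F.l1_loss(output, target)  # pixel reconstruction term
    # identity term: 1 - cosine similarity between face embeddings
    id_term = 1.0 - F.cosine_similarity(id_emb_out, id_emb_target, dim=-1).mean()
    return recon + id_weight * id_term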

Pre-trained models

Pre-trained models used during training are provided. You can download them and place them in the locations specified in train.yaml.

Monitoring via Tensorboard

Training progress, including loss values and validation outputs, can be monitored with TensorBoard. By default, logs are stored in log; this can be changed by editing the log.log_dir parameter in the config YAML file.

tensorboard --logdir log --bind_all # Scalars, Images, Hparams, and Projector will be shown.

Inpainting

To inpaint a single face, run this command:

python inpainting.py --checkpoint_path chkpt/vgg220531_last_10.ckpt --image_path output/example/example1.png --mask_path output/example/mask1.png --output_path output/result/result1.png
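
To inpaint a whole folder of occluded faces, you can loop over image/mask pairs and invoke the same command. The sketch below uses only the flags shown above; the directory layout (masks sharing file names with their images) is an assumption.

# Illustrative batch wrapper around inpainting.py, using only the flags shown above.
# Assumes each image in image_dir has a same-named mask in mask_dir (hypothetical layout).
import os
import subprocess

image_dir = "output/example"
mask_dir = "output/example_masks"
out_dir = "output/result"
ckpt = "chkpt/vgg220531_last_10.ckpt"

os.makedirs(out_dir, exist_ok=True)
for name in sorted(os.listdir(image_dir)):
    subprocess.run([
        "python", "inpainting.py",
        "--checkpoint_path", ckpt,
        "--image_path", os.path.join(image_dir, name),
        "--mask_path", os.path.join(mask_dir, name),
        "--output_path", os.path.join(out_dir, name),
    ], check=True)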

Recognizing

Please refer to ./prepare_inpainted_datasets.py for inpainted dataset generation.

# build random-part inpainted results of lfw_112

python prepare_inpainted_datasets.py --root data/LFWs/20220712/ --unocc_dir data/LFWs/lfw_112/ -m random_part -f ca_id_fill

To evaluate recognition performance, run this command:

python lfw_arc_test.py

You may modify the paths config.DATASET.LFW_PATH, config.DATASET.LFW_OCC_PATH, and config.DATASET.LFW_PAIRS to point to the pair of datasets you want to evaluate.
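
For reference, LFW-style verification scores each pair by the cosine similarity of its two face embeddings and sweeps a decision threshold to report accuracy. The sketch below illustrates that idea with NumPy; it is not the code in lfw_arc_test.py.

# Illustrative LFW-style verification: cosine similarity plus threshold sweep.
# emb_a, emb_b are (N, D) embeddings of the two faces in each pair;
# labels is 1 for same-identity pairs and 0 otherwise.
import numpy as np

def verification_accuracy(emb_a, emb_b, labels, num_thresholds=400):
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    sims = np.sum(a * b, axis=1)
    thresholds = np.linspace(-1.0, 1.0, num_thresholds)
    return max(np.mean((sims > t).astype(int) == labels) for t in thresholds)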

To plot the ROC curve and visualize the performance improvement, call the function plot() instead of main() in lfw_arc_test.py and run this command:

python lfw_arc_test.py

To compare the gap between the average similarity of same-identity pairs and that of cross-identity pairs, run this command:

# Remember to point the script at the datasets you want to compare first
python lfw_arc_res50_similarity_gap.py
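
The idea behind this comparison: a better recovery should raise the average similarity of same-identity pairs while keeping cross-identity pairs low, widening the gap between the two averages. A minimal sketch of that measurement, assuming precomputed per-pair similarities, is:

# Illustrative similarity-gap measurement over precomputed pair similarities.
# sims holds one cosine similarity per pair; labels marks same-identity pairs with 1.
import numpy as np

def similarity_gap(sims, labels):
    sims, labels = np.asarray(sims), np.asarray(labels)
    return sims[labels == 1].mean() - sims[labels == 0].mean()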

Results

Comparison with results from the original paper

Figure in the original paper

Our Results

License

Implementation Author

Honglei Li, Yifan Zhang @ MINDs Lab, Inc. (l_honglei@foxmail.com)

Paper Information
