Reference-Guided Large-Scale Face Inpainting with Identity and Texture Control (TCSVT 2023) [Paper]
Face inpainting aims to plausibly predict the missing pixels of a face image within a corrupted region. Most existing methods rely on generative models that learn a face image distribution from a large dataset, which produces uncontrollable results, especially for large-scale missing regions. To introduce strong control into face inpainting, we propose a novel reference-guided face inpainting method that fills a large-scale missing region with identity and texture control guided by a reference face image.
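As a rough illustration of the reference-guided setting, the sketch below shows one common way to condition an inpainting generator on a reference image. It is a conceptual example only, not the architecture used in this repository; the image size and conditioning scheme are assumptions.

```python
# Conceptual sketch only: one common way to condition an inpainting generator
# on a reference image; NOT the actual network in this repository.
import torch

B, C, H, W = 1, 3, 256, 256                     # assumed image size
target = torch.rand(B, C, H, W)                 # face image to be inpainted
reference = torch.rand(B, C, H, W)              # reference face providing identity/texture
mask = torch.zeros(B, 1, H, W)
mask[:, :, 64:192, 64:192] = 1.0                # 1 = missing (corrupted) region

masked_target = target * (1.0 - mask)           # erase the missing region
# Concatenate the corrupted image, the mask, and the reference along the
# channel dimension as the generator's conditional input.
generator_input = torch.cat([masked_target, mask, reference], dim=1)
print(generator_input.shape)                    # torch.Size([1, 7, 256, 256])
```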
- The code has been tested with PyTorch 1.10.1 and Python 3.7.11. We train our model on an NVIDIA RTX 3090 GPU.
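A quick way to verify the environment (our own snippet, not part of the repository):

```python
# Environment sanity check (not part of the repository).
import sys
import torch

print("Python:", sys.version.split()[0])           # tested with 3.7.11
print("PyTorch:", torch.__version__)               # tested with 1.10.1
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))   # e.g. an RTX 3090
```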
Download our dataset celebID from BaiDuYun (password: 5asv) | GoogleDrive and set the relevant paths in `configs/config.yaml` and `test.py`.
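A hedged sketch for checking that the paths set in `configs/config.yaml` resolve; the key names `data_root` and `train_list` are assumptions and may differ from the actual config schema:

```python
# Hedged sketch: check that the dataset paths in configs/config.yaml resolve.
# The keys "data_root" and "train_list" are assumptions, not the real schema.
import os
import yaml

with open("configs/config.yaml") as f:
    cfg = yaml.safe_load(f)

for key in ("data_root", "train_list"):            # assumed key names
    path = cfg.get(key)
    if path is not None:
        status = "exists" if os.path.exists(path) else "missing"
        print(f"{key}: {path} ({status})")
```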
Download the pretrained ArcFace model from BaiDuYun (password: ot7a) | GoogleDrive.
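If you want to verify the download, here is a minimal sketch for inspecting the checkpoint; the file name `arcface.pth` and the checkpoint layout are assumptions, so use the actual path you configured:

```python
# Hedged sketch for inspecting the downloaded ArcFace checkpoint. The file
# name "arcface.pth" is an assumption. Checkpoints may be a raw state_dict,
# a dict wrapping one, or a serialized module; normalize to a state_dict.
import torch

ckpt = torch.load("arcface.pth", map_location="cpu")
state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt.state_dict()
print("tensors in checkpoint:", len(state))
print("sample keys:", list(state.keys())[:5])
```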
To train a model, run:
```
python train.py
```
Download the pretrained model from BaiDuYun (password: spwk) | GoogleDrive. To generate inpainted results guided by different reference images, run:
```
python test.py
```
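As an optional sanity check (our own suggestion, not part of `test.py`), identity preservation can be gauged by the cosine similarity between ArcFace embeddings of an inpainted result and its reference; the random 512-d vectors below merely stand in for real embeddings:

```python
# Optional identity sanity check (not part of test.py): compare ArcFace
# embeddings of an inpainted result and its reference with cosine similarity.
import torch
import torch.nn.functional as F

def identity_similarity(emb_result: torch.Tensor, emb_reference: torch.Tensor) -> float:
    """Cosine similarity between two face embeddings (higher = closer identity)."""
    return F.cosine_similarity(emb_result, emb_reference, dim=-1).item()

emb_result = torch.randn(512)      # placeholder for ArcFace(result)
emb_reference = torch.randn(512)   # placeholder for ArcFace(reference)
print(identity_similarity(emb_result, emb_reference))
```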
If you use this code for your research, please cite our paper:
```
@article{luo2023reference,
  title={Reference-Guided Large-Scale Face Inpainting with Identity and Texture Control},
  author={Luo, Wuyang and Yang, Su and Zhang, Weishan},
  journal={IEEE Transactions on Circuits and Systems for Video Technology},
  year={2023},
  publisher={IEEE}
}
```
We use zllrunning's model to obtain face segmentation maps, 1adrianb's model to align faces and detect landmarks, and foamliu's model to compute the ArcFace loss.
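For reference, a minimal hedged example of the landmark detector (1adrianb's `face_alignment` package); the enum name differs across package versions (`LandmarksType._2D` in older releases, `LandmarksType.TWO_D` in newer ones), so adjust for the version you install:

```python
# Hedged example of the third-party landmark detector (1adrianb's
# face-alignment package: pip install face-alignment).
import numpy as np
import face_alignment

fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D,
                                  flip_input=False, device="cpu")
image = np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8)  # stand-in image
landmarks = fa.get_landmarks(image)   # list of (68, 2) arrays, or None if no face found
print(landmarks)
```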