FaceRefiner

Official PyTorch implementation of "FaceRefiner: High-Fidelity Facial Texture Refinement with Differentiable Rendering-based Style Transfer" (IEEE Transactions on Multimedia, 2024).

Overview

Overview of the proposed FaceRefiner. The inputs are the face image I, the 3D face reconstruction results (3D model M, camera pose P, and sampled texture IS), and the initial imperfect texture IC produced by an existing facial texture generation method. Differentiable rendering-based style transfer is adopted to improve the quality of IC: the differentiable renderer produces the rendered image IR under the input camera pose P, the rendering loss measures the inconsistency between the rendered and input images, and the gradients are back-propagated through a classical style transfer module (with style and content losses) to optimize the facial texture IX.
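
The refinement loop can be summarized by the following sketch. This is for orientation only and is not the repository's exact implementation: render stands in for the differentiable renderer, style_loss and content_loss for the style transfer module (the repository builds on STROTSS), and the loss weights and the style/content pairing are illustrative assumptions.

# Minimal sketch of the refinement loop described above (for orientation only,
# not the repository's exact implementation). `render` stands in for the
# differentiable renderer, `style_loss` / `content_loss` for the style transfer
# module (the repository uses STROTSS), and the weights are placeholders.
import torch

def refine_texture(I, M, P, I_S, I_C, render, style_loss, content_loss,
                   steps=500, lr=0.01, w_render=1.0, w_style=1.0, w_content=1.0):
    # Initialize the optimized texture I_X from the imperfect initial texture I_C.
    I_X = I_C.detach().clone().requires_grad_(True)
    optimizer = torch.optim.Adam([I_X], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()

        # Rendering loss: render I_X with model M under the input camera pose P
        # and compare against the input face image I.
        I_R = render(I_X, M, P)
        loss_render = torch.nn.functional.l1_loss(I_R, I)

        # Classical style transfer losses: the sampled texture I_S serves as the
        # style reference and the initial texture I_C as the content reference
        # (an illustrative pairing).
        loss = (w_render * loss_render
                + w_style * style_loss(I_X, I_S)
                + w_content * content_loss(I_X, I_C))

        loss.backward()
        optimizer.step()

    return I_X.detach()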

Requirements

This implementation is tested on Ubuntu 22.04 with an NVIDIA RTX 3090 GPU and the following dependencies:

Python 3.7
CUDA 11.1
PyTorch 1.8.1

Installation

1. Clone the repository and set up a conda environment as follows:

git clone https://github.com/HarshWinterBytes/FaceRefiner
cd FaceRefiner
conda env create -f environment.yml
conda activate face_refiner
pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
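
After step 1, you can optionally confirm that the CUDA-enabled PyTorch build is active. This is a convenience check, not part of the repository:

# Optional sanity check for the Python environment set up in step 1.
import torch
import torchvision

print(torch.__version__)        # expected: 1.8.1+cu111
print(torchvision.__version__)  # expected: 0.9.1+cu111
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. an RTX 3090
else:
    print("CUDA is not available - check the driver and CUDA 11.1 installation")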

2. Installation of Deep3DFaceRecon_pytorch

  • 2.a. Install Nvdiffrast library:
cd external/deep3dfacerecon_pytorch/
git clone https://github.com/NVlabs/nvdiffrast.git
pip install ./nvdiffrast
  • 2.b. Install Arcface Pytorch:
git clone https://github.com/deepinsight/insightface.git
cp -r ./insightface/recognition/arcface_torch/ ./models/
  • 2.c. Prepare prerequisite models: Deep3DFaceRecon_pytorch uses the Basel Face Model 2009 (BFM09) to represent 3D faces. Request access to BFM09 using this link. After getting access, download "01_MorphableModel.mat" and "BFM_model_front.mat". In addition, we use the Expression Basis provided by Guo et al. Download the Expression Basis (Exp_Pca.bin) using this link (Google Drive). Organize all files into the following structure:
FaceRefiner
│
└─── external
     │
     └─── deep3dfacerecon_pytorch
          │
          └─── BFM
               │
               └─── 01_MorphableModel.mat
               │
               └─── BFM_model_front.mat
               │
               └─── Exp_Pca.bin
               │
               └─── ...
  • 2.d. Deep3DFaceRecon_pytorch provides a model trained on a combination of the CelebA, LFW, 300WLP, IJB-A, LS3D-W, and FFHQ datasets. Download the pre-trained model using this link (Google Drive) and organize the directory into the following structure:
FaceRefiner
│
└─── external
     │
     └─── deep3dfacerecon_pytorch
          │
          └─── checkpoints
               │
               └─── face_recon
                   │
                   └─── epoch_latest.pth

  • 2.e. Download the pre-trained ArcFace model using this link. By default, we use the ResNet-50 backbone (ms1mv3_arcface_r50_fp16). Organize the downloaded files into the following structure:
FaceRefiner
│
└─── external
     │
     └─── deep3dfacerecon_pytorch
          │
          └─── checkpoints
               │
               └─── recog_model
                    │
                    └─── ms1mv3_arcface_r50_fp16
                         │
                         └─── backbone.pth
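
Before moving on, the following optional check verifies that the prerequisite files from steps 2.c-2.e are in place. It is a convenience script, not part of the repository; the paths follow the directory trees shown above.

# Optional check that the prerequisite files from steps 2.c-2.e are in place.
# Paths follow the directory trees shown above; run from the repository root.
import os

required = [
    "external/deep3dfacerecon_pytorch/BFM/01_MorphableModel.mat",
    "external/deep3dfacerecon_pytorch/BFM/BFM_model_front.mat",
    "external/deep3dfacerecon_pytorch/BFM/Exp_Pca.bin",
    "external/deep3dfacerecon_pytorch/checkpoints/face_recon/epoch_latest.pth",
    "external/deep3dfacerecon_pytorch/checkpoints/recog_model/ms1mv3_arcface_r50_fp16/backbone.pth",
]
for path in required:
    print(("OK      " if os.path.isfile(path) else "MISSING ") + path)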

3. Installation of face3d

cd external/face3d/mesh/cython
python setup.py build_ext -i 
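
To confirm the in-place build produced the compiled extension, you can run a quick check. This is a convenience snippet; the extension name mesh_core_cython follows the usual face3d layout and is an assumption here.

# Optional check for the compiled face3d extension (run from the repository root).
# The extension name `mesh_core_cython` follows the usual face3d layout and is
# an assumption here.
import glob

matches = glob.glob("external/face3d/mesh/cython/mesh_core_cython*.so")
print("compiled extension:", matches[0] if matches else "NOT FOUND - rerun the build step")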

Usage

  • Run the examples of Figure 7 (content images from Deep3DFace) and Figure 8 (content images from OSTeC) in the original paper:
sh run.sh

Acknowledgement

  • Our project relies on futscdav's STROTSS
  • Thanks to OSTeC for providing face visibility maps and content images
  • Thanks to Deep3DFaceRecon_pytorch for providing 3D face reconstruction and content images
  • We use MTCNN for face detection
  • We use face3d for UV face rendering

Citation

If you find this work useful for your research, please cite our paper:

@ARTICLE{10443565,
  author={Li, Chengyang and Cheng, Baoping and Cheng, Yao and Zhang, Haocheng and Liu, Renshuai and Zheng, Yinglin and Liao, Jing and Cheng, Xuan},
  journal={IEEE Transactions on Multimedia}, 
  title={FaceRefiner: High-Fidelity Facial Texture Refinement with Differentiable Rendering-based Style Transfer}, 
  year={2024},
  volume={},
  number={},
  pages={1-14},
  keywords={Faces;Three-dimensional displays;Rendering (computer graphics);Image reconstruction;Face recognition;Solid modeling;Cameras;facial texture generation;3D face reconstruction;style transfer},
  doi={10.1109/TMM.2024.3361728}}

