
NFR

This is the official implementation of the paper 'Neural Face Rigging for Animating and Retargeting Facial Meshes in the Wild'

Why try NFR?

NFR can transfer facial animations to any customized face mesh, even one with a different topology, without any manual rigging or data capture. For a facial mesh obtained from any source, you can quickly retarget existing animations onto it and see the results in real time.

Testing Setup

This release is tested under Ubuntu 20.04 with an RTX 4090 GPU. Other CUDA-capable GPU models should work as well.

The testing module uses vedo for interactive visualization, so a display is required.

Windows is currently not supported unless you manually install the pytorch3d package following their official guide.

  1. Create an environment called NFR
conda create -n NFR python=3.9
conda activate NFR
  2. (Recommended) Install mamba to accelerate the installation process
conda install mamba -c conda-forge
  3. Install necessary packages via mamba
mamba install pytorch=1.12.1 cudatoolkit=11.3 pytorch-sparse=0.6.15 pytorch3d=0.7.1 cupy=11.3 numpy=1.23.5 -c pytorch -c conda-forge -c pyg -c pytorch3d
  4. Install necessary packages via pip
pip install potpourri3d trimesh open3d transforms3d libigl robust_laplacian vedo
  5. Download the preprocessed data and the pretrained model here: Google Drive. Place them in the root directory of this repo.

  6. Run!

python test_user.py -c config/test.yml
  7. Interactive visualization

Here's the interface you should see when the script runs successfully. You can interact with the sliders and buttons to change the expression of the source mesh, and manually adjust the expression via FACS-like codes (see the sketch after the list below).

  • Zone 0: The source mesh
  • Zone 1: The target mesh (with the source mesh's expression transferred)
  • Zone 2: The source mesh in the ICT blendshape space
  • Zone 3: Interactive buttons and sliders
    • Buttons:
      • code_idx: enter the FACS code index (0-52) in the terminal
      • input/next/random: change the source expression index
      • iden: change the source identity
    • Sliders:
      • AU scale: Change the intensity of the FACS code specified by code_idx
      • scale: Uniformly scale the target mesh
      • x/y/z shift: Shift the target mesh
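For intuition, the FACS-like code behind the code_idx button and the AU scale slider can be thought of as a vector of 53 activation values (indices 0-52). The snippet below is only an illustration of that idea; the variable names are made up here and do not reflect how test_user.py stores the code internally.

# Illustrative sketch only -- not code from this repo.
import numpy as np

code = np.zeros(53)      # one entry per FACS-like code, indexed 0-52 (see code_idx)
code_idx = 10            # arbitrary example index; any value in 0-52 works
code[code_idx] = 0.8     # analogous to raising the "AU scale" slider for that code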

Pre-processed facial animation sequences

Currently we provide two pre-processed facial animation sequences, one from ICT and one from Multiface. You can switch between them by changing the dataset and data_head variables in the config/test.yml file.
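To double-check which sequence is currently selected, you can read those two fields back, assuming config/test.yml is standard YAML and a parser such as PyYAML is available (the snippet is illustrative, not part of the repo):

# Illustrative only: inspect the two fields mentioned above.
import yaml

with open("config/test.yml") as f:
    cfg = yaml.safe_load(f)

print(cfg["dataset"], cfg["data_head"])  # edit these two entries in the file to switch sequences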

Using your customized data

You can test with your own mesh as the target. This has the following requirements:

  1. There should be no mouth/eye/nose sockets or eyeballs inside the face. Otherwise, bad deformations may occur in those areas.
  2. The mouth and eyes need to be cut for correct global solving. Please refer to the preprocessed meshes in the test-mesh folder as examples.
  3. Remember to roughly align your mesh to the examples in Blender via the align.blend file!
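As a rough sanity check before running test_user.py on your own mesh, you can compare its placement and scale against one of the provided templates using trimesh (already in the dependency list above). This is only a sketch; the file names are placeholders and the repo itself does not ship such a script.

# Rough pre-check sketch (not part of the repo); file names are placeholders.
import trimesh

template = trimesh.load_mesh("test-mesh/example_template.obj")  # any provided template mesh
custom = trimesh.load_mesh("my_face.obj")                       # your own mesh

print("template bounds:\n", template.bounds)
print("custom bounds:\n", custom.bounds)
print("centroid offset:", custom.centroid - template.centroid)  # should be small after alignment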

Training

The training module will be released later.

Citation

@inproceedings{qin2023NFR,
    author    = {Qin, Dafei and Saito, Jun and Aigerman, Noam and Groueix, Thibault and Komura, Taku},
    title     = {Neural Face Rigging for Animating and Retargeting Facial Meshes in the Wild},
    year      = {2023},
    booktitle = {SIGGRAPH 2023 Conference Papers},
}

Acknowledgement

This project uses code from ICT, Multiface, and Diffusion-Net; data from ICT and Multiface; and testing mesh templates from ICT, Multiface, COMA, FLAME, and MeshTalk. Thank you!
