GANimation: Anatomically-aware Facial Animation from a Single Image

[Project] [Paper]

Official implementation of GANimation. In this work we introduce a novel GAN conditioning scheme based on Action Unit (AU) annotations, which describe, in a continuous manifold, the anatomical facial movements that define a human expression. Our approach permits controlling the magnitude of activation of each AU and combining several of them. For more information, please refer to the paper.
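
As a rough illustration of this conditioning scheme, the target expression can be thought of as a vector of continuous AU intensities. The sketch below is hypothetical (the dimensionality, indices, and AU ordering are made up for illustration and need not match the code):

```python
import numpy as np

# Hypothetical illustration: the generator is conditioned on a continuous
# vector of Action Unit activations, one entry per AU. OpenFace reports
# 17 AU intensities; the exact set and ordering used by the code may differ.
N_AUS = 17

neutral = np.zeros(N_AUS, dtype=np.float32)

# Combining several AUs: a smile roughly corresponds to activating
# AU06 (cheek raiser) and AU12 (lip corner puller). Indices are made up.
smile = neutral.copy()
smile[[5, 11]] = 1.0

# Controlling the magnitude of activation: intermediate expressions are
# points on the segment between two AU vectors.
alpha = 0.5
half_smile = (1 - alpha) * neutral + alpha * smile
```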

This code was made public to share our research for the benefit of the scientific community. Do NOT use it for immoral purposes.


Prerequisites

  • Install PyTorch (version 0.3.1), torchvision and dependencies from http://pytorch.org
  • Install the packages in requirements.txt (pip install -r requirements.txt)

Data Preparation

The code requires a directory containing the following files:

  • imgs/: folder with all the images.
  • aus_openface.pkl: dictionary mapping each image name to its Action Unit activations.
  • train_ids.csv: file listing the names of the images used for training.
  • test_ids.csv: file listing the names of the images used for testing.

An example of this directory is shown in sample_dataset/.
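For a quick check that a dataset directory matches this layout, here is a minimal sketch. It assumes the CSVs list one image filename per line and that aus_openface.pkl maps image names to AU activation arrays, as in sample_dataset/; the exact file formats may differ in detail:

```python
import os
import pickle

data_dir = "sample_dataset"

# aus_openface.pkl: dict mapping image name -> AU activation array
with open(os.path.join(data_dir, "aus_openface.pkl"), "rb") as f:
    aus = pickle.load(f)

# Verify that every id in the train/test splits has an image on disk.
for split in ("train_ids.csv", "test_ids.csv"):
    with open(os.path.join(data_dir, split)) as f:
        names = [line.strip() for line in f if line.strip()]
    missing = [n for n in names
               if not os.path.isfile(os.path.join(data_dir, "imgs", n))]
    print(split, len(names), "ids,", len(missing), "missing images")
```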

To generate aus_openface.pkl, extract the Action Units of each image with OpenFace and store each output in a CSV file with the same name as the image. Then run:

python data/prepare_au_annotations.py
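The aggregation step amounts to collecting the per-image OpenFace CSVs into one dictionary. The sketch below shows the idea (the shipped data/prepare_au_annotations.py may differ in detail); csv_dir is a hypothetical location, and the column handling assumes OpenFace's AU intensity columns such as "AU01_r", whose names OpenFace pads with spaces:

```python
import glob
import os
import pickle

import numpy as np
import pandas as pd

csv_dir = "path/to/openface_csvs"  # hypothetical location of the CSVs

aus = {}
for csv_path in glob.glob(os.path.join(csv_dir, "*.csv")):
    # One CSV per image, named after the image.
    name = os.path.splitext(os.path.basename(csv_path))[0]
    df = pd.read_csv(csv_path)
    df.columns = [c.strip() for c in df.columns]  # OpenFace pads names
    au_cols = sorted(c for c in df.columns
                     if c.startswith("AU") and c.endswith("_r"))
    aus[name] = df.loc[0, au_cols].to_numpy(dtype=np.float32)

with open("aus_openface.pkl", "wb") as f:
    pickle.dump(aus, f)
```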

Run

To train:

bash launch/run_train.sh

To test:

python test.py --input_path path/to/img

Citation

If you use this code or ideas from the paper for your research, please cite our paper:

@article{Pumarola_ijcv2019,
    title={GANimation: One-Shot Anatomically Consistent Facial Animation},
    author={A. Pumarola and A. Agudo and A.M. Martinez and A. Sanfeliu and F. Moreno-Noguer},
    journal={International Journal of Computer Vision (IJCV)},
    year={2019}
}