Official implementation of GANimation. In this work we introduce a novel GAN conditioning scheme based on Action Unit (AU) annotations, which describe, in a continuous manifold, the anatomical facial movements defining a human expression. Our approach permits controlling the magnitude of activation of each AU and combining several of them. For more information, please refer to the paper.
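As a rough illustration of what such a continuous conditioning looks like (this snippet is not part of the repository; the AU count, indices and helper name are illustrative), expressions can be encoded as activation vectors, combined, and interpolated:

```python
import numpy as np

NUM_AUS = 17  # number of AU intensities tracked by OpenFace

def au_vector(activations):
    """Build a conditioning vector from a {au_index: intensity} dict."""
    v = np.zeros(NUM_AUS, dtype=np.float32)
    for idx, intensity in activations.items():
        v[idx] = np.clip(intensity, 0.0, 1.0)
    return v

# Combine several AUs, e.g. partially raised cheeks plus pulled lip corners
# (the vector positions used here are illustrative, not the OpenFace order).
smile = au_vector({5: 0.6, 11: 0.8})

# Because the conditioning space is continuous, expressions can be
# interpolated to animate a face from neutral to the target.
neutral = np.zeros(NUM_AUS, dtype=np.float32)
frames = [neutral + t * (smile - neutral) for t in np.linspace(0, 1, 10)]
```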
This code was made public to share our research for the benefit of the scientific community. Do NOT use it for immoral purposes.
- Install PyTorch (version 0.3.1), Torch Vision and dependencies from http://pytorch.org
- Install the remaining requirements: `pip install -r requirements.txt`
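Since the code targets an old PyTorch release, it can save time to verify the environment before running anything. A minimal, optional check (not part of the repository):

```python
import torch
import torchvision

# The code targets PyTorch 0.3.1; newer releases changed core APIs
# (e.g. Variable semantics), so a version mismatch is worth catching early.
assert torch.__version__.startswith("0.3"), \
    "Expected PyTorch 0.3.x, found {}".format(torch.__version__)
print("PyTorch", torch.__version__, "| torchvision", torchvision.__version__)
```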
The code requires a directory containing the following files:

- `imgs/`: folder with all images.
- `aus_openface.pkl`: dictionary containing the Action Units of each image.
- `train_ids.csv`: file containing the names of the images used for training.
- `test_ids.csv`: file containing the names of the images used for testing.

An example of this directory is shown in `sample_dataset/`.
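As a quick way to validate such a directory, the sketch below loads the pickle and cross-checks it against the id files. Note that the internal layout of `aus_openface.pkl` (image name mapped to its AU activations) is an assumption here, not a documented format:

```python
import os
import pickle

def check_dataset(root):
    """Sanity-check the data directory described above (a sketch; the
    pickle layout -- image name -> AU activations -- is an assumption)."""
    assert os.path.isdir(os.path.join(root, "imgs")), "missing imgs/ folder"

    with open(os.path.join(root, "aus_openface.pkl"), "rb") as f:
        aus = pickle.load(f)  # assumed: {image_name: array of AU intensities}

    for split in ("train_ids.csv", "test_ids.csv"):
        with open(os.path.join(root, split)) as f:
            names = [line.strip() for line in f if line.strip()]
        missing = [n for n in names if os.path.splitext(n)[0] not in aus]
        print("{}: {} ids, {} without AU annotations".format(
            split, len(names), len(missing)))

check_dataset("sample_dataset")
```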
To generate `aus_openface.pkl`, extract the Action Units of each image with OpenFace and store each output in a CSV file with the same name as the image. Then run:

```
python data/prepare_au_annotations.py
```
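For orientation, the aggregation this step performs could look roughly like the following. This is a sketch under assumptions: the `openface_out/` folder name is hypothetical, and the actual `data/prepare_au_annotations.py` may differ in details:

```python
import glob
import os
import pickle

import pandas as pd

# Collect per-image OpenFace CSVs into one {image_name: AU array} dict.
# OpenFace names AU intensity columns like "AU01_r" ... "AU45_r"
# (with a leading space in its raw headers, hence the strip()).
aus = {}
for csv_path in glob.glob("openface_out/*.csv"):  # hypothetical output folder
    df = pd.read_csv(csv_path)
    au_cols = [c for c in df.columns
               if c.strip().startswith("AU") and c.strip().endswith("_r")]
    name = os.path.splitext(os.path.basename(csv_path))[0]
    aus[name] = df[au_cols].values.squeeze()

with open("aus_openface.pkl", "wb") as f:
    pickle.dump(aus, f)
```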
To train:

```
bash launch/run_train.sh
```
To test:

```
python test --input_path path/to/img
```
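To animate a whole folder of images, the documented command can simply be looped; the `my_faces/` folder name below is illustrative:

```python
import glob
import subprocess

# Run the test command once per image in a folder.
for img_path in sorted(glob.glob("my_faces/*.jpg")):
    subprocess.check_call(["python", "test", "--input_path", img_path])
```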
If you use this code or ideas from the paper for your research, please cite our paper:
```
@inproceedings{pumarola2018ganimation,
    title={GANimation: Anatomically-aware Facial Animation from a Single Image},
    author={A. Pumarola and A. Agudo and A.M. Martinez and A. Sanfeliu and F. Moreno-Noguer},
    booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
    year={2018}
}
```