A new gesture-to-gesture translation framework, introduced in our paper Gesture-to-Gesture Translation in the Wild via Category-Independent Conditional Maps, published at the ACM International Conference on Multimedia, 2019.

1. Dataset preparation

More details >>>


We provide a user-friendly configuration method via the Conda system; you can create a new Conda environment with:

conda env create -f environment.yml
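The environment.yml file defines the environment name and its dependencies. As a hedged sketch only (the actual name, packages, and versions in the repo's file may differ; Python 3.5 is taken from the version badge):

```yaml
# Hypothetical sketch of an environment.yml; names and versions are assumptions.
name: trianglegan        # assumed environment name; the real yml sets its own
channels:
  - pytorch
  - defaults
dependencies:
  - python=3.5
  - pytorch
  - torchvision
  - numpy
  - pip
```

After creation, activate the environment with `conda activate <name>`, where `<name>` is whatever the `name:` field of the actual environment.yml specifies.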


1. Download the datasets and copy them into ./datasets

2. Modify the scripts to train/test:

   • Training

     sh ./scripts/ <gpu_id>
     sh ./scripts/ <gpu_id>

   • Testing

     sh ./scripts/ <gpu_id>
     sh ./scripts/ <gpu_id>
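Each launcher script under ./scripts/ takes the GPU id as its only argument. A minimal sketch of what such a launcher typically does, assuming the common pattern of mapping the argument onto CUDA_VISIBLE_DEVICES (the training entrypoint and its flags are placeholders, not the repo's actual command):

```shell
#!/bin/sh
# Hedged sketch of a ./scripts/ launcher; the entrypoint and flags are assumptions.
gpu_id="${1:-0}"                        # first argument: GPU id (default 0)
export CUDA_VISIBLE_DEVICES="$gpu_id"   # make only the chosen GPU visible
echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"
# python train.py --gpu_ids 0 ...       # actual training command omitted here
```

Because CUDA_VISIBLE_DEVICES remaps device indices, the training process always sees the selected GPU as device 0.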

3. The pretrained models are saved at ./checkpoints/{model_name}. Check here for all the available TriangleGAN models.

4. We also provide an implementation of GestureGAN (ACM MM 2018): [paper] | [code].

sh ./scripts/ <gpu_id>
sh ./scripts/ <gpu_id>


More Details >>>

5. Visual Results

More Details >>>


This code is based on pytorch-CycleGAN-and-pix2pix. Thanks to the contributors of that project.

Related Work

Evaluation codes

We recommend evaluating the performance of the compared models using this repo: GAN-Metrics


If you use our datasets or code, please cite our paper:

@inproceedings{liu2019gesture,
  title={Gesture-to-gesture translation in the wild via category-independent conditional maps},
  author={Liu, Yahui and De Nadai, Marco and Zen, Gloria and Sebe, Nicu and Lepri, Bruno},
  booktitle={Proceedings of the 27th ACM International Conference on Multimedia},
  year={2019}
}

If you have any questions, please do not hesitate to contact me (yahui.liu AT