
CoMA: Convolutional Mesh Autoencoders

Generating 3D Faces using Convolutional Mesh Autoencoders

This is the official repository of "Generating 3D Faces using Convolutional Mesh Autoencoders".

[Project Page] [arXiv]


This code is tested with TensorFlow 1.3. Requirements (including TensorFlow) can be installed using:

pip install -r requirements.txt

Install mesh processing libraries from MPI-IS/mesh.


Download the data from the Project Page.

Preprocess the data

python --data <PATH_OF_RAW_DATA> --save_path <PATH_TO_SAVE_PROCESSED_DATA>

Data pre-processing creates numpy files for the interpolation and extrapolation experiments (Section X of the paper). This creates 13 different train and test files. sliced_[train|test] is for the interpolation experiment. <EXPRESSION>_[train|test] files are for cross-validation across 12 different expression sequences.
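As a sketch of how the preprocessed splits described above could be consumed, the snippet below loads one train/test pair with numpy. The `load_split` helper and the exact `<name>_train.npy` / `<name>_test.npy` file layout are assumptions for illustration; check the preprocessing output directory for the actual names.

```python
import os
import numpy as np

def load_split(save_path, name):
    """Load one preprocessed train/test split.

    `name` is either "sliced" (interpolation experiment) or one of the
    12 expression sequence names (cross-validation experiment).
    NOTE: the file naming scheme here is an assumption, not taken from
    the repository's code.
    """
    train = np.load(os.path.join(save_path, name + "_train.npy"))
    test = np.load(os.path.join(save_path, name + "_test.npy"))
    return train, test
```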


To train, specify a name and choose a particular train/test split. For example,

python --data data/sliced --name sliced


To test, specify a name and the data. For example,

python --data data/sliced --name sliced --mode test

Reproducing results in the paper

Run the following script. The models are slightly better (~1% on average) than the ones reported in the paper.



To sample faces from the latent space, specify a model and data. For example,

python --data data/sliced --name sliced --mode latent

A face template pops up. You can then use the keys qwertyui to sample faces by moving forward in each of the 8 latent dimensions. Use asdfghjk to move backward in the latent space.
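The key-to-dimension mapping described above can be sketched as follows. This is not the repository's actual implementation, only a minimal illustration of the scheme: each of the keys q w e r t y u i steps one of the 8 latent dimensions forward, and a s d f g h j k step the same dimensions backward. The `step_latent` helper and the step size are assumptions.

```python
import numpy as np

FORWARD_KEYS = "qwertyui"   # key i steps latent dimension i forward
BACKWARD_KEYS = "asdfghjk"  # key i steps latent dimension i backward

def step_latent(z, key, step=0.1):
    """Return a copy of the 8-D latent vector z nudged along the
    dimension bound to `key`; unknown keys leave z unchanged."""
    z = np.array(z, dtype=float)
    if key in FORWARD_KEYS:
        z[FORWARD_KEYS.index(key)] += step
    elif key in BACKWARD_KEYS:
        z[BACKWARD_KEYS.index(key)] -= step
    return z
```

For example, pressing "q" then "a" moves dimension 0 forward and back to where it started.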

For more flexible usage, refer to lib/


We thank Raffi Enficiaud and Ahmed Osman for pushing the release of psbody.mesh, an essential dependency for this project.


The code contained in this repository is under the MIT License and is free for commercial and non-commercial purposes. The dependencies, in particular MPI-IS/mesh, and our data have their own license terms, which can be found on their respective webpages. The dependencies and data are NOT covered by the MIT License associated with this repository.

When using this code, please cite:

Anurag Ranjan, Timo Bolkart, Soubhik Sanyal, and Michael J. Black. "Generating 3D faces using Convolutional Mesh Autoencoders." ECCV 2018.
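For convenience, the citation above in BibTeX form. The entry key and the exact booktitle string are assumptions; the authors, title, venue, and year are taken from the reference above.

```bibtex
@inproceedings{ranjan2018coma,
  title     = {Generating 3{D} Faces using Convolutional Mesh Autoencoders},
  author    = {Ranjan, Anurag and Bolkart, Timo and Sanyal, Soubhik and Black, Michael J.},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year      = {2018},
}
```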
