Extreme 3D Face Reconstruction: Seeing Through Occlusions

Please note that the main part of the code has been released, though we are still testing it to fix possible glitches. Thank you.

Python and C++ code for realistic 3D face modeling from a single image, using our shape and detail regression networks published at CVPR 2018 [1] (follow the link to our PDF, which has many more reconstruction results).

This page contains end-to-end demo code that estimates the 3D facial shape, with realistic details, directly from an unconstrained 2D face image. For a given input image, it produces standard PLY files of the 3D face shape. It accompanies the deep networks described in our papers [1] and [2]. The occlusion recovery code, however, will be published in a future release. We also include demo code and data presented in [1].

Dependencies

Data requirements

Before compiling the code, please make sure you have all the required data in the following specific folder:

Note that we modified the model files from the 3DMM-CNN paper. Therefore, if you generated these files before, you need to re-create them for this code.

Installation

There are two options for compiling our code:

Installation with Docker (recommended)

	docker build -t extreme-3dmm-docker .
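Run this command from the repository root, where the Dockerfile is located. The resulting image, tagged extreme-3dmm-docker, is the one referenced by the nvidia-docker run command in the Usage section below.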

Installation without Docker on Linux

The steps below have been tested on Ubuntu Linux only:

  • Install Python 2.7
  • Install the required third-party packages:
	sudo apt-get install -y libhdf5-serial-dev libboost-all-dev cmake libosmesa6-dev freeglut3-dev
	wget http://dlib.net/files/dlib-19.6.tar.bz2
	tar xvf dlib-19.6.tar.bz2
	cd dlib-19.6/
	mkdir build
	cd build
	cmake ..
	cmake --build . --config Release
	sudo make install
	cd ..
  • Install PyTorch
  • Install other required third-party Python packages:
	pip install opencv-python torchvision scikit-image cvbase pandas mmdnn dlib
  • Configure the Dlib and HDF5 paths in CMakeLists.txt, if needed
  • Build C++ code
	mkdir build
	cd build
	cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=../demoCode ..
	make
	make install
	cd ..

These steps should generate the TestBump executable in the demoCode folder.
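As a quick sanity check after the steps above, the short Python script below (a hypothetical helper, not part of this repository) verifies that the required Python packages import cleanly and that TestBump was installed. Run it from the repository root:

	# sanity_check.py -- hypothetical post-install check, not part of this release.
	# Verifies the third-party Python packages import and that the TestBump
	# binary was installed into demoCode/ as described above.
	import importlib
	import os

	for name in ["cv2", "torch", "torchvision", "skimage", "cvbase", "pandas", "dlib"]:
	    try:
	        importlib.import_module(name)
	        print("OK   " + name)
	    except ImportError as exc:
	        print("FAIL %s (%s)" % (name, exc))

	if os.path.isfile(os.path.join("demoCode", "TestBump")):
	    print("TestBump found")
	else:
	    print("TestBump missing -- re-run the build steps")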

Usage

Start docker container

If you compiled our code with Docker, you need to start a Docker container to run it. You also need to set up a shared folder to transfer input/output data between the host computer and the container.

  • Prepare the shared folder on the host computer. For example, /home/ubuntu/shared
  • Copy input data (if needed) to the shared folder
  • Start container:
	nvidia-docker run --rm -ti --ipc=host --privileged -v /home/ubuntu/shared:/shared extreme-3dmm-docker bash

The folder /home/ubuntu/shared on your host computer is now mounted as /shared inside the container.

3D face modeling with realistic details from a set of input images

  • Go into the demoCode folder. The demo script can be run from the command line with the following syntax:
	python testBatchModel.py <inputList> <outputDir>

where the parameters are the following:

  • <inputList> is a text file containing the paths to the input images, one per line.
  • <outputDir> is the path to the output directory, where the PLY files are stored.

An example <inputList> is demoCode/testImages.txt:

../data/test/03f245cb652c103e1928b1b27028fadd--smith-glasses-too-faced.jpg
../data/test/20140420_011855_News1-Apr-25.jpg
....
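If your test images sit in a single directory, a small script like the one below (a hypothetical convenience, not part of the release) can generate such a list; adjust IMAGE_DIR and LIST_PATH for your setup:

	# make_input_list.py -- hypothetical helper to build an <inputList> file
	# from a directory of images; not part of this repository.
	import glob
	import os

	IMAGE_DIR = "../data/test"    # directory containing the input face images
	LIST_PATH = "testImages.txt"  # list file consumed by testBatchModel.py

	paths = []
	for pattern in ("*.jpg", "*.jpeg", "*.png"):
	    paths.extend(glob.glob(os.path.join(IMAGE_DIR, pattern)))

	with open(LIST_PATH, "w") as f:
	    for path in sorted(paths):
	        f.write(path + "\n")  # one image path per line

	print("Wrote %d image paths to %s" % (len(paths), LIST_PATH))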

The output 3D models will be <outputDir>/<imageName>_<postfix>.ply, with <postfix> = <modelType>_<poseType>:

  • <modelType> can be "foundation", "withBump" (before soft-symmetry), "sparseFull" (soft-symmetry on the sparse mesh), or "final".
  • <poseType> can be "frontal" or "aligned" (based on the estimated pose).

The final 3D shape has <postfix> set to "final_frontal". You can configure which output models are produced in the code before compiling.
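As an illustration, the snippet below (a sketch based purely on the naming scheme above; which combinations are actually written depends on the build-time configuration) prints the file names you might expect for one input image:

	# Enumerate the output PLY names implied by the naming scheme above.
	# Sketch only; output_dir and image_name are example values.
	import os

	output_dir = "/shared"
	image_name = "myface"  # input image file name without extension

	for model in ["foundation", "withBump", "sparseFull", "final"]:
	    for pose in ["frontal", "aligned"]:
	        print(os.path.join(output_dir, "%s_%s_%s.ply" % (image_name, model, pose)))

	# The final 3D shape is the file ending in "final_frontal.ply".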

The PLY files can be displayed with standard off-the-shelf 3D visualization software, such as MeshLab.
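If you only want to check a result without a 3D viewer, a minimal header inspection like the following works for standard PLY files (a generic sketch, not tied to this codebase; the example path is hypothetical):

	# Print the element counts (e.g., vertex/face) declared in a PLY header.
	# Works for ASCII and binary PLY files, since the header itself is ASCII.
	def ply_element_counts(path):
	    counts = {}
	    with open(path, "rb") as f:
	        for raw in f:
	            line = raw.decode("ascii", "ignore").strip()
	            if line.startswith("element"):
	                _, name, count = line.split()
	                counts[name] = int(count)
	            elif line == "end_header":
	                break
	    return counts

	print(ply_element_counts("/shared/myface_final_frontal.ply"))  # hypothetical path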

Sample command:

	python testBatchModel.py testImages.txt /shared

Note that our occlusion recovery code is not included in this release.

Demo code and data in our paper

  • Go into the demoCode folder. The demo script can be run from the command line as follows:
	./testPaperResults.sh

Before exiting the docker container, remember to save your output data to the shared folder.

Citation

If you find this work useful, please cite our paper [1] using the following BibTeX:

@inproceedings{tran2017extreme,
  title={Extreme {3D} Face Reconstruction: Seeing Through Occlusions},
  author={Tran, Anh Tuan and Hassner, Tal and Masi, Iacopo and Paz, Eran and Nirkin, Yuval and Medioni, G\'{e}rard},
  booktitle={IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
  year=2018
}

References

[1] A. Tran, T. Hassner, I. Masi, E. Paz, Y. Nirkin, G. Medioni, "Extreme 3D Face Reconstruction: Seeing Through Occlusions", IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, June 2018

[2] A. Tran, T. Hassner, I. Masi, G. Medioni, "Regressing Robust and Discriminative 3D Morphable Models with a Very Deep Neural Network", IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2017

Changelog

  • Dec. 2018, Convert to Dockerfile
  • Dec. 2017, First Release

License and Disclaimer

Please see LICENSE.txt in this repository.

Contacts

If you have any questions, drop an email to anhttran@usc.edu, hassner@isi.edu, and iacopoma@usc.edu, or leave a message on GitHub (login required).
