AlbedoMM: A Morphable Face Albedo Model


William A. P. Smith 1, Alassane Seck 2,3, Hannah Dee 3, Bernard Tiddeman 3, Joshua Tenenbaum 4 and Bernhard Egger 4
1 University of York, UK
2 ARM Ltd, UK
3 Aberystwyth University, UK
4 MIT, USA

[CVPR 2020]


Abstract

In this paper, we bring together two divergent strands of research: photometric face capture and statistical 3D face appearance modelling. We propose a novel lightstage capture and processing pipeline for acquiring ear-to-ear, truly intrinsic diffuse and specular albedo maps that fully factor out the effects of illumination, camera and geometry. Using this pipeline, we capture a dataset of 50 scans and combine them with the only existing publicly available albedo dataset (3DRFE) of 23 scans. This allows us to build the first morphable face albedo model. We believe this is the first statistical analysis of the variability of facial specular albedo maps. This model can be used as a plug-in replacement for the texture model of the Basel Face Model, and we make our new albedo model publicly available. We ensure careful spectral calibration such that our model is built in a linear sRGB space, suitable for inverse rendering of images taken by typical cameras. We demonstrate our model in a state-of-the-art analysis-by-synthesis 3DMM fitting pipeline, are the first to integrate specular map estimation, and outperform the Basel Face Model in albedo reconstruction.

Oral CVPR 2020 presentation

Scala code for loading, visualising and fitting the model

We make available Scala code for loading the statistical model, visualising its principal components and fitting it to an image in an inverse rendering pipeline. The code also allows combining the albedo model with the Basel Face Model to build a joint model file.

Matlab code for sampling and Poisson blending textures

In our capture pipeline, we acquire three photometric views of the head and a mesh to which we fit template geometry. We have developed Matlab code for sampling and blending the different views into a seamless per-vertex texture. We also make available a Matlab implementation of per-vertex ambient occlusion.
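
The released implementation of this step is in Matlab. Purely as an illustration of what per-vertex ambient occlusion computes, here is a minimal Python sketch that approximates it by hemisphere ray casting; the use of trimesh, the placeholder file name and the sample count are our own assumptions, not part of the release.

    import numpy as np
    import trimesh

    def vertex_ambient_occlusion(mesh, n_rays=64, eps=1e-4):
        """Approximate per-vertex ambient occlusion by hemisphere ray casting."""
        ao = np.zeros(len(mesh.vertices))
        for i, (v, n) in enumerate(zip(mesh.vertices, mesh.vertex_normals)):
            # Sample random directions and flip any that point into the surface
            dirs = np.random.randn(n_rays, 3)
            dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
            dirs[dirs @ n < 0] *= -1
            # Offset ray origins slightly along the normal to avoid self-intersection
            origins = np.tile(v + eps * n, (n_rays, 1))
            hit = mesh.ray.intersects_any(origins, dirs)
            ao[i] = 1.0 - hit.mean()  # fraction of unoccluded directions
        return ao

    # Example usage with a placeholder mesh file name
    mesh = trimesh.load('template_head.ply', process=False)
    occlusion = vertex_ambient_occlusion(mesh)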

Loading the model in Matlab

If you wish to use the model in Matlab, download the .h5 file in the release folder and use the following code:

    texMU = h5read('albedoModel2020_bfm_albedoPart.h5','/diffuseAlbedo/model/mean')';
    texPC = h5read('albedoModel2020_bfm_albedoPart.h5','/diffuseAlbedo/model/pcaBasis')';
    texEV = h5read('albedoModel2020_bfm_albedoPart.h5','/diffuseAlbedo/model/pcaVariance')';
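
If you prefer Python, a rough equivalent using h5py is sketched below. Note that h5py keeps the on-disk orientation (whereas Matlab's h5read transposes), the basis is assumed to be orthonormal with variances stored separately, and the per-vertex RGB flattening order is an assumption; the specular part presumably follows an analogous layout.

    import h5py
    import numpy as np

    with h5py.File('albedoModel2020_bfm_albedoPart.h5', 'r') as f:
        mean = np.asarray(f['diffuseAlbedo/model/mean']).ravel()        # mean diffuse albedo
        basis = np.asarray(f['diffuseAlbedo/model/pcaBasis'])           # principal components
        var = np.asarray(f['diffuseAlbedo/model/pcaVariance']).ravel()  # per-component variances

    # Make sure the basis is (dimensions x components); h5py keeps the on-disk orientation
    if basis.shape[0] == var.size:
        basis = basis.T

    # Draw a random model instance: mean + basis @ (standard-normal coefficients * std devs)
    coeffs = np.random.randn(var.size)
    sample = mean + basis @ (coeffs * np.sqrt(var))

    # Per-vertex RGB diffuse albedo (assuming [r, g, b] interleaving per vertex)
    albedo = sample.reshape(-1, 3)
    print(albedo.shape)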

FLAME topology model

We also make a version of our model available in the topology of the FLAME model. See the official release, which contains a compressed numpy file with the mean, principal components and variances for the diffuse and specular models. Big thanks to Timo Bolkart for registering our meshes with the FLAME model.
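
To inspect that file from Python without assuming its exact key names, something like the following can be used (the file name below is a placeholder for whatever the official FLAME release provides):

    import numpy as np

    # Placeholder file name; substitute the file shipped with the official FLAME release
    data = np.load('albedoModel2020_FLAME_albedoPart.npz')

    # Enumerate the stored arrays (mean, principal components and variances
    # for the diffuse and specular models) and their shapes
    for key in data.files:
        print(key, data[key].shape, data[key].dtype)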

Raw data

Some of the participants gave additional permission for their scans to be distributed. We will provide the raw data captured in our scanner, multiview stereo models and the final registered, processed albedo maps on the template mesh. We hope to make this available very soon.

License

We give permission to use the model and code only for academic research purposes. Anyone wishing to use the model or code for commercial purposes should contact William Smith in the first instance.

Citation

If you use the model or the code in your research, please cite the following paper:

William A. P. Smith, Alassane Seck, Hannah Dee, Bernard Tiddeman, Joshua Tenenbaum and Bernhard Egger. "A Morphable Face Albedo Model". In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. https://arxiv.org/abs/2004.02711

Bibtex:

@inproceedings{smith2020morphable,
  title={A Morphable Face Albedo Model},
  author={Smith, William A. P. and Seck, Alassane and Dee, Hannah and Tiddeman, Bernard and Tenenbaum, Joshua and Egger, Bernhard},
  booktitle={Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages={5011--5020},
  year={2020}
}

In addition, if you use the model, you should cite the following paper since the model is partly derived from the data in the 3DRFE dataset:

Giota Stratou, Abhijeet Ghosh, Paul Debevec and Louis-Philippe Morency. "Effect of illumination on automatic expression recognition: a novel 3D relightable facial database". In Proc. of the International Conference on Automatic Face and Gesture Recognition (FG), pp. 611-618, 2011.

Bibtex:

@inproceedings{stratou2011effect,
  title={Effect of illumination on automatic expression recognition: a novel {3D} relightable facial database},
  author={Stratou, Giota and Ghosh, Abhijeet and Debevec, Paul and Morency, Louis-Philippe},
  booktitle={Proc. International Conference on Automatic Face and Gesture Recognition},
  pages={611--618},
  year={2011}
}