Output meshes with fine details #1

Closed
YuDeng opened this issue Dec 9, 2020 · 2 comments

YuDeng commented Dec 9, 2020

Hi, thanks for making this great work open source.

I followed the instructions in the repository and got output meshes for the example images. However, I find that they are all coarse shapes with no fine details. Is it possible to include the code for generating the detailed face meshes as well?

In addition, I have a question about the paper:

The model in this paper has a much lower reconstruction error on the NoW benchmark compared to previous methods, yet the training losses for the coarse shape (except the new shape consistency loss) seem to be the ones commonly used in previous works. I wonder which loss contributes most to the reconstruction quality. Also, what do you think is the most important factor in obtaining accurate reconstruction results in your paper?

I would really appreciate it if you could answer these questions :)

TimoBolkart (Collaborator) commented

Hi Yu,

Thank you for your interest in DECA.

Regarding the detail mesh, thank you for the suggestion. We just updated the code to also output the reconstructed mesh with the displacements applied. Please pull and try again. To leverage DECA's animatable details, we recommend following the provided animation pipeline (see Section 4 of the paper for more information).
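
For readers who want to see what "applying displacements" means here, below is a minimal, generic sketch of displacement mapping along per-vertex normals. The function and variable names are hypothetical and this is not DECA's internal API; the updated demo writes the detail mesh out for you.

```python
# Illustrative sketch only (hypothetical names, not DECA's exact code):
# the coarse mesh is refined by offsetting each vertex along its unit normal
# by the displacement value sampled for that vertex from the predicted
# displacement map.
import numpy as np

def apply_displacements(vertices: np.ndarray,
                        normals: np.ndarray,
                        per_vertex_displacement: np.ndarray) -> np.ndarray:
    """vertices: (N, 3) coarse mesh vertices
    normals: (N, 3) per-vertex unit normals
    per_vertex_displacement: (N,) offsets sampled from the displacement map
    returns: (N, 3) detail mesh vertices
    """
    return vertices + per_vertex_displacement[:, None] * normals
```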

Regarding the questions about the paper, there are several factors contributing to the qualitative and quantitative state-of-the-art performance of DECA, namely

  1. A novel shape consistency loss (Equation 8) that encourages the shapes reconstructed from different images of the same subject to be the same (see the toy sketch after this list).
  2. Unlike most previous 3D face reconstruction methods, we use FLAME instead of the Basel Face Model (BFM) to represent the coarse shape. FLAME's identity space is more expressive than BFM's, as demonstrated in Section 7.3 of the original FLAME paper.
  3. The coarse DECA shape is trained on a large dataset of 2 million images with wide coverage of ethnicities (see Appendix A). This makes DECA robust to large variations in ethnicity, face shape, head pose, lighting conditions, etc., which is important for high reconstruction quality.
  4. The training data are automatically cleaned to avoid poor landmark labels (see Appendix A). We will release the landmark labels for the training data together with the training code in the future.
  5. The different losses are carefully combined and weighted (see Appendix A). All losses, i.e., the landmark loss (Eq. 5), the photometric loss, the identity loss (Eq. 7), and the shape consistency loss (Eq. 8), contribute to the quality and robustness of the reconstruction (also illustrated in the sketch after this list).
  6. It is also worth mentioning the eye closure loss (Eq. 6). While it does not directly influence the reconstruction error on NoW [Sanyal et al. 2019], it makes the reconstructed eye region visually more appealing.
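
To make items 1 and 5 a bit more concrete, here is a toy sketch of the shape-consistency idea together with an illustrative weighted loss combination. The simple L2 formulation is a stand-in, not the exact Eq. 8 (which enforces consistency between images of the same identity as described in the paper), and all weights below are made up; the real values are given in Appendix A.

```python
# Toy illustration (not the exact Eq. 8 formulation): a simplified stand-in that
# penalizes the distance between shape codes predicted from two images of the
# same subject, plus an illustrative weighted combination of the losses listed
# above. All weights are hypothetical; see Appendix A for the actual values.
import torch

def shape_consistency_toy(beta_a: torch.Tensor, beta_b: torch.Tensor) -> torch.Tensor:
    """beta_a, beta_b: (B, n_shape) shape codes from two images of the same person."""
    return ((beta_a - beta_b) ** 2).mean()

def total_coarse_loss(l_landmark, l_photometric, l_identity, l_shape_consistency):
    # Hypothetical weights, for illustration only.
    return (1.0 * l_landmark
            + 2.0 * l_photometric
            + 0.2 * l_identity
            + 0.5 * l_shape_consistency)
```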

I hope that answers your questions.


YuDeng commented Dec 11, 2020

Thanks for your detailed explanation. It really helps.

YuDeng closed this as completed Dec 11, 2020