
the final result.obj includes details? #12

Open
Ivy147 opened this issue Sep 6, 2019 · 2 comments
Ivy147 commented Sep 6, 2019

Hi anpei, thanks for your amazing work!
When I run "python facialDetails.py -i ./samples/details/019615.jpg -o ./results", I get:
[image]
It seems right, but the result.obj shows some problems in MeshLab:
[image]
[image]
1. There is a mismatch between the texture and the model around the nose.
2. The model doesn't have wrinkles.
3. The model doesn't look like the result in your paper. Can you tell me what parameters you used?
Should I use faceRender to overlay the result? Does hmrenderer.exe provide any arguments to save the obj with wrinkles?

apchenstu (Owner) commented Sep 6, 2019

Hey there,
On the misalignment around the nose: we are also aware of some limitations of this landmark-based proxy estimator. We are developing our own proxy estimator and hope to release it in a future publication.
On the details: as mentioned in our paper, the details are represented as a displacement map, so we recommend rendering to visualize them. Alternatively, you can subdivide the mesh and apply the displacement to the vertices via their texture coordinates.
On the similarity with the teaser: in this release version we downsample the input image to 256 when predicting the expression parameters, while the teaser used the full resolution.
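The "apply displacement to vertices via texture coordinates" step can be sketched as follows. This is a minimal NumPy illustration, not the repository's actual pipeline: the mesh arrays, the displacement map, and the scale factor are all synthetic stand-ins, and sampling is nearest-neighbour for brevity (a real bake would bilinearly interpolate and would run after subdivision so the mesh has enough vertices to carry the detail).

```python
# Sketch: offset each vertex along its normal by the displacement value
# sampled from the displacement map at that vertex's UV coordinate.
# All inputs here are toy data, not outputs of facialDetails.py.
import numpy as np

def apply_displacement(vertices, normals, uvs, disp_map, scale=1.0):
    """Return vertices displaced along their normals by the map sampled at their UVs."""
    h, w = disp_map.shape
    # Map UVs in [0, 1] to pixel indices (nearest-neighbour; v is flipped
    # because image row 0 is conventionally the top of the texture).
    px = np.clip(np.round(uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
    py = np.clip(np.round((1.0 - uvs[:, 1]) * (h - 1)).astype(int), 0, h - 1)
    disp = disp_map[py, px]  # one scalar displacement per vertex
    return vertices + normals * disp[:, None] * scale

# Toy example: a unit quad facing +z, with a uniform displacement of 0.5.
vertices = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
normals = np.tile([0.0, 0.0, 1.0], (4, 1))
uvs = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
disp_map = np.full((4, 4), 0.5)

displaced = apply_displacement(vertices, normals, uvs, disp_map)
print(displaced[:, 2])  # every vertex moves 0.5 along +z
```

Writing the displaced vertices back out with the original faces and texture coordinates then gives an obj that carries the detail in its geometry rather than only in the map.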

@hangon666

(quoting apchenstu's reply above)

Hello, thanks for your work. Could you explain how to subdivide the mesh and apply the displacement to the vertices via their texture coordinates, if I want to save the obj file with the details?
