
basic information about ply file #47

Closed
Adorablepet opened this issue Sep 1, 2020 · 7 comments

Comments

@Adorablepet commented Sep 1, 2020

I created a template with 3ds Max and saved it as an obj file without texture information. Then I used a tool to convert the obj into a ply file. Does the generated ply file meet the input requirements of VOCA? Thanks.

@yifanh1 commented Sep 1, 2020

I downloaded a 3D model and tried it. It failed with ValueError: Cannot feed value of shape (357, 1499814, 3, 1) for Tensor 'VOCA/Inputs_decoder/template_placeholder:0', which has shape '(?, 5023, 3, 1)'.
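(That error means the mesh fed as the template has 1,499,814 vertices, while VOCA's template placeholder expects exactly 5023 vertices, the vertex count of the FLAME topology. A quick sanity check before running VOCA, sketched here with trimesh as an assumed dependency, not something this repo ships:)

```python
# Rough sanity check (assumed trimesh dependency, not part of VOCA):
# verify that a candidate template mesh has the 5023 vertices of the FLAME topology.
import trimesh

mesh = trimesh.load("my_template.ply", process=False)  # process=False preserves vertex count/order
print("vertices:", mesh.vertices.shape[0])
assert mesh.vertices.shape[0] == 5023, "not in FLAME topology (expected 5023 vertices)"
```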

mbdash commented Sep 1, 2020

see #43
I have the same issue.
From my understanding, your 3D model must be registered to the FLAME topology.
There is no information in this repo or the paper on how to register a 3D model to the FLAME topology.
EDIT: there is some info in Section 4.2 of the paper, I am just not smart enough to make anything of it.

The only thing I found was to use the RingNet repo to pass in an image and have it generate a new 3D model.
Not really a good solution.

We are in the realm of "programmers" vs "artists".
Classic in 3D :-)

Thanks for opening this issue, it might bring more attention to the problem.

@Adorablepet (Author) commented:

@mbdash You wrote: "The only thing I found was to use the RingNet repo to pass in an image and have it generate a new 3D model. Not really a good solution." Does a model generated this way meet the requirements of VOCA? Does the generated 3D model contain information such as the texture and pose of the input image? Thanks.

mbdash commented Sep 2, 2020

I haven't tried RingNet (https://github.com/soubhiksanyal/RingNet),
since the original texture and UVs would need to be redone and I am not an artist.
And the output would still be vertex animation, which is not standard for a 3D engine / game.
(I am not saying it is impossible to use in an engine such as Unreal, just that it is not standard for games.)

I am also looking at this repo: https://github.com/leventt/surat,
which could instead output blendshape animation with a bit more community contribution.
If the community were to help create a dataset, maybe using ARKit from Apple, that could provide enough data for the repo owner and contributors to push the code and models to output morph targets, which are far more 3D engine / game friendly.
The tricky part is getting enough people with access to an Apple device with a TrueDepth camera (found on devices with the A12 chip) willing to start recording themselves.

@TimoBolkart (Owner) commented:

It seems there is some confusion about the mesh input to VOCA. As described in the readme, VOCA animates static templates in FLAME topology. Such templates can be obtained by fitting FLAME to scans or images, or by sampling the FLAME shape space. This means the input mesh must be in semantic correspondence with the FLAME template. Sampling the FLAME shape space, for instance, can give you an almost unlimited number of templates that can be animated by VOCA.
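As a rough illustration of the "sample the FLAME shape space" route (a minimal sketch, not code from this repo; it assumes the chumpy-based FLAME release from the flame-fitting repo, where load_model() returns a model whose first 300 betas are shape components and the remaining 100 are expression):

```python
# Minimal sketch, assuming the chumpy-based FLAME release (flame-fitting repo):
# sample random shape parameters and export the neutral mesh as a VOCA template.
# The load_model() / betas layout is an assumption, not something VOCA ships.
import numpy as np
from smpl_webuser.serialization import load_model
from psbody.mesh import Mesh

model = load_model('./flame_model/generic_model.pkl')  # FLAME model downloaded separately
model.betas[:300] = np.random.normal(0.0, 0.5, 300)    # random identity shape
model.betas[300:] = 0.0                                 # neutral expression
model.pose[:] = 0.0                                     # neutral pose (global, neck, jaw, eyes)

template = Mesh(v=model.r, f=model.f)                   # 5023 vertices in FLAME topology
template.write_ply('sampled_template.ply')
```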

If you want to get a particular scan into FLAME topology (as we did for the scan of Winston Churchill), you can for instance read Section 4.2 of the FLAME paper to see how this is done. Long story short, it is an optimization problem where one minimizes the difference between the FLAME mesh surface and the scan.
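To make that idea concrete, here is a heavily simplified sketch (not the fitting code used for the paper): optimize FLAME shape and pose parameters so every FLAME vertex moves close to a target mesh, with a small shape regularizer. It again assumes the chumpy-based FLAME release, assumes the target is already roughly aligned and scaled to FLAME, and omits the landmark terms and robust point-to-surface distances the paper actually uses:

```python
# Heavily simplified Section 4.2-style fitting sketch (assumed chumpy FLAME API;
# scipy is not part of VOCA): pull every FLAME vertex towards its nearest point
# on a target mesh, with a quadratic prior on the shape parameters.
# A real registration also needs rigid alignment, landmarks and robust distances.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree
from smpl_webuser.serialization import load_model
from psbody.mesh import Mesh

model = load_model('./flame_model/generic_model.pkl')
target = Mesh(filename='my_character.obj')          # scan or hand-modeled mesh, pre-aligned to FLAME
tree = cKDTree(target.v)                            # nearest-neighbour queries into the target vertices

def energy(params):
    model.betas[:300] = params[:300]                # identity shape
    model.pose[:] = params[300:]                    # global / neck / jaw / eye pose
    dists, _ = tree.query(model.r)                  # FLAME-vertex-to-target distances
    return np.mean(dists ** 2) + 1e-3 * np.sum(params[:300] ** 2)

res = minimize(energy, np.zeros(315), method='L-BFGS-B')
energy(res.x)                                       # write the optimum back into the model
Mesh(v=model.r, f=model.f).write_ply('registered_template.ply')
```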

We could potentially release VOCA-compatible meshes for 1200 subjects obtained by scanning them. I would need to check with the dataset owners, though, whether this is feasible. Would that help anybody?

mbdash commented Sep 2, 2020

@TimoBolkart thank you for the information.

For the problem of using a hand-modeled 3D mesh, I think what would help the most is any code related to how to accomplish Section 4.2:

"Long story short, it is an optimization problem where one minimizes the difference between the FLAME mesh surface and the scan."

i.e. figuring out how to solve that optimization problem, but for an arbitrary 3D model (obj, fbx, ply, etc.) made in 3ds Max, Blender, or other 3D software instead of a scanned mesh.

Some people (I mean myself) aren't smart enough to implement the beautiful equations in Section 4.2.

It would give an approximate FLAME version of the source mesh, which should be acceptable (if the character is realistic enough).
A 3D artist would need to redo the UVs and maybe fix some textures, but that would be an acceptable process,
especially if a mesh is "converted" to FLAME topology prior to texturing.

As for releasing the 1200-subject dataset, it might be interesting and give some options, but with potential legal / license issues.

@Adorablepet (Author) commented:

@TimoBolkart I checked and the output of the VOCA model is an obj file containing only v and f values. I also have the original textured template as an obj file (with v, vt, vn, etc.) together with the corresponding mtl file and texture png. Is there a way to rewrite the obj files output by the model (or their ply conversions) so that they carry the vt / vn / material information from that textured obj? Is there any method for reference? Thank you.
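Since the meshes VOCA writes out share the topology of the template, one possible approach (a sketch with placeholder file names, not something this repo provides) is to copy the mtllib / usemtl / vt / vn lines and the textured face definitions from the original obj, and keep only the animated vertex positions from each VOCA output obj. This assumes the textured obj keeps the FLAME vertex order:

```python
# Sketch only (placeholder file names): graft the template's UVs and material
# reference onto a VOCA output obj. Both meshes must share the FLAME topology
# and vertex order for the face indices to stay valid.
def transfer_uvs(textured_template_obj, voca_output_obj, out_obj):
    with open(textured_template_obj) as f:
        template_lines = f.readlines()
    with open(voca_output_obj) as f:
        voca_lines = f.readlines()

    mtl_lines  = [l for l in template_lines if l.startswith(('mtllib', 'usemtl'))]
    vt_lines   = [l for l in template_lines if l.startswith('vt ')]
    vn_lines   = [l for l in template_lines if l.startswith('vn ')]  # neutral-pose normals; recompute later if needed
    face_lines = [l for l in template_lines if l.startswith('f ')]   # faces with v/vt(/vn) indices
    new_verts  = [l for l in voca_lines if l.startswith('v ')]       # animated vertex positions only

    with open(out_obj, 'w') as f:
        f.writelines(mtl_lines + new_verts + vt_lines + vn_lines + face_lines)

transfer_uvs('textured_template.obj', 'meshes/00001.obj', 'frame_00001_textured.obj')
```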
