
How to eliminate head jitter in data preprocessing? #28

Closed
fastcode3d opened this issue Jan 13, 2020 · 6 comments
@fastcode3d

Thank you very much for your work.
I downloaded Registered Data, Unposed Data, and Unposed Cleaned Data.
What are the operations in the data preprocessing stage?
How to eliminate the jitter of the head and finally get Unposed Cleaned Data?

@TimoBolkart
Owner

We use a method similar to the sequential registration method described in the FLAME paper to register the raw 3D head scans. The resulting data are denoted as Registered Data.
We then unpose all registered meshes. This is done by fitting FLAME to each mesh and then removing the effects of global rotation, translation, and head rotation around the neck. These data are denoted as Unposed Data.
Finally, to get the Unposed Cleaned Data, for each sequence within the Unposed Data we fix the neck-boundary vertices and the ear vertices to a fixed location, and apply Gaussian filtering (across the sequence) to the region around the eyes to mitigate capture noise.
Does this answer your question?
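For reference, the per-sequence cleaning step described above could be sketched roughly as follows. This is not the repository's actual preprocessing code; the vertex index sets (`boundary_idx`, `eye_idx`) and the choice of pinning vertices to their temporal mean are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def clean_sequence(verts, boundary_idx, eye_idx, sigma=1.0):
    """Sketch of the cleaning step for one unposed sequence.

    verts:        (T, V, 3) array of unposed vertices over T frames
    boundary_idx: indices of neck-boundary and ear vertices (hypothetical)
    eye_idx:      indices of the eye-region vertices (hypothetical)
    """
    cleaned = verts.copy()
    # Pin the neck-boundary/ear vertices to a fixed location; here we use
    # their per-vertex mean over the sequence as that location (assumption).
    fixed = verts[:, boundary_idx].mean(axis=0)
    cleaned[:, boundary_idx] = fixed
    # Gaussian-filter the eye region across the sequence (axis 0 = time)
    # to mitigate capture noise; all other vertices are left untouched.
    cleaned[:, eye_idx] = gaussian_filter1d(verts[:, eye_idx], sigma=sigma, axis=0)
    return cleaned
```

After this step the pinned vertices are identical in every frame, and the eye region is temporally smoothed while the rest of the mesh is unchanged.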

@fastcode3d
Author

Yes, thank you very much.

@SaltedSlark

@TimoBolkart Hello, I want to unpose my custom mesh produced by MICA reconstruction. I want to know how to remove the effects of global rotation, translation, and head rotation around the neck.

@TimoBolkart
Owner

Hello, you will need to get the FLAME parameters (i.e., translation, rotation, pose, identity, and expression parameters) for your sequence. If only meshes in FLAME mesh topology are given, you can for instance get these with the fit_3D_mesh script. However, if you get the FLAME meshes from some FLAME tracker, e.g., from monocular videos, it is best to adapt that tracker to directly output the FLAME parameters per frame.

To unpose the data, you only need to zero out the translation and the first six parameters of the pose vector (global rotation and neck rotation), feed these changed parameters into the FLAME model (i.e., run the forward pass of the model with the changed parameters), and output the resulting mesh.
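The unposing described above can be sketched as follows. This assumes the standard 15-dimensional FLAME pose vector (global rotation 0:3, neck 3:6, jaw 6:9, eye rotations 9:15); `flame_forward` is a hypothetical stand-in for the forward pass of whatever FLAME implementation you use, not a real API.

```python
import numpy as np

def unpose(flame_forward, trans, pose, shape, expr):
    """Zero out global motion and re-run the FLAME forward pass.

    flame_forward: stand-in callable for your FLAME model's forward pass
                   (hypothetical signature: trans, pose, shape, expr)
    trans: (3,)  global translation
    pose:  (15,) FLAME pose vector in axis-angle form
    """
    trans_unposed = np.zeros_like(trans)  # drop global translation
    pose_unposed = pose.copy()
    pose_unposed[:6] = 0.0                # zero global rotation + neck rotation
    # Jaw/eye pose, identity (shape), and expression are kept as-is.
    return flame_forward(trans=trans_unposed, pose=pose_unposed,
                         shape=shape, expr=expr)
```

Only the first six pose entries and the translation are zeroed; jaw and eye articulation, identity, and expression survive unchanged, so the resulting mesh keeps the facial motion but stays fixed in space.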

@SaltedSlark

@TimoBolkart Thanks so much for your reply, I will try it.

@SaltedSlark

@TimoBolkart Hello, when I export the predicted mesh to a PLY file (vertices predicted by the model, faces taken from FLAME_sample.ply) and open it in MeshLab, I get this:
[screenshot of the broken mesh in MeshLab]
What's wrong?
