Extract expression parameters from images #1
You can either fit the model to an image, similar to the 3D landmark fitting example but optimizing the difference between the predicted 2D landmarks and the projected 3D landmarks (i.e. as described in the FLAME paper), or use a pre-trained regression model like RingNet, which directly outputs FLAME model parameters for an image.
It seems RingNet targets a different version of the FLAME model, since it outputs 100 shape parameters and 50 expression parameters (RingNet/config_test.py):
It is possible to optimize the TF_FLAME parameters (300 shape / 100 expression) given the RingNet vertex output, but that doesn't look like the best way to fit TF_FLAME to an image.
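For illustration, fitting linear-model coefficients to a dense vertex output is an ordinary least-squares problem. This is a hedged sketch with a random stand-in basis (not the actual FLAME blendshape basis or any TF_FLAME API); only the vertex count (FLAME meshes have 5023 vertices) is taken from the model itself. Dense vertices give far more constraints than parameters, unlike sparse 2D keypoints:

```python
import numpy as np

# Hypothetical sketch: recover FLAME-style coefficients from dense
# target vertices (e.g. a RingNet output) via least squares.
# "basis" and "mean" are random placeholders for the real linear basis.
rng = np.random.default_rng(1)
n_verts, n_params = 5023, 300           # FLAME has 5023 vertices
basis = rng.standard_normal((3 * n_verts, n_params))
mean = rng.standard_normal(3 * n_verts)

true_p = rng.standard_normal(n_params)  # ground-truth coefficients
target = mean + basis @ true_p          # simulated dense vertex output

# Overconstrained system (15069 equations, 300 unknowns):
p, *_ = np.linalg.lstsq(basis, target - mean, rcond=None)
```

With dense, noise-free targets the coefficients are recovered exactly, which is why vertex fitting is much better conditioned than keypoint fitting.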
RingNet does not use another version of FLAME, only a subset of the parameters. The shape parameters RingNet returns are the first 100 shape parameters of FLAME, and the expression parameters are the first 50 expression parameters. Recently, a demo was added to this repo to fit FLAME to sparse 2D keypoints.
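Concretely, since the RingNet coefficients are simply the leading entries of the full FLAME parameter vectors, they can be zero-padded to full length. A minimal sketch (variable names are illustrative, not from either repo):

```python
import numpy as np

# RingNet predicts the first 100 shape and first 50 expression
# coefficients; TF_FLAME uses 300 shape and 100 expression
# coefficients. Zero-padding the trailing coefficients yields valid
# full-length FLAME parameter vectors.
ringnet_shape = np.random.randn(100)   # stand-in for a RingNet output
ringnet_expr = np.random.randn(50)

flame_shape = np.zeros(300)
flame_shape[:100] = ringnet_shape

flame_expr = np.zeros(100)
flame_expr[:50] = ringnet_expr
```

Setting the trailing coefficients to zero is exact here because the basis vectors are ordered, so unused components simply contribute nothing to the mesh.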
Fitting 2D image keypoints provides only a very sparse signal about facial shape and expression. Optimizing more parameters than there are landmark constraints means solving an underconstrained optimization problem: the more parameters are optimized, the more carefully the regularization must be set.
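The underconstrained situation above can be sketched with a toy linearized version of the problem, assuming a hypothetical linear landmark basis `B` in place of the real (nonlinear) FLAME landmark function; this is not the TF_FLAME fitting code, just an illustration of why a regularizer is needed:

```python
import numpy as np

# 68 2D keypoints give 136 scalar constraints; optimizing 150
# parameters is underconstrained, so a ridge (L2) term on the
# parameters keeps the problem well-posed.
rng = np.random.default_rng(0)
n_params, n_landmarks = 150, 68
B = rng.standard_normal((2 * n_landmarks, n_params))   # toy landmark basis
target = rng.standard_normal(2 * n_landmarks)          # detected 2D keypoints
lam = 1e-1                                             # regularization weight

# Closed-form solution of  argmin_p ||B p - target||^2 + lam ||p||^2
p = np.linalg.solve(B.T @ B + lam * np.eye(n_params), B.T @ target)
residual = np.linalg.norm(B @ p - target)
```

Without the `lam * np.eye(...)` term, `B.T @ B` is rank-deficient (at most rank 136 for 150 unknowns) and the solve is ill-posed; the weight `lam` trades landmark accuracy against implausible parameter magnitudes, which is the tuning the comment above refers to.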
All good, this answer explains the mismatch.
Wow, the fastest answer ever =). |
Hi, thanks a lot for the amazing work. I am wondering if I can extract expression parameters from images, so that I can get supervision of 3D