
Provide instructions on how to build or download face_blendshapes.tflite #4162

Closed
GeorgeS2019 opened this issue Mar 10, 2023 · 17 comments

Labels: legacy:face mesh (Issues related to Face Mesh), type:docs-feature (Doc issues for new solution, or clarifications about functionality), type:tflite (TensorFlow Lite)

@GeorgeS2019

Provide instructions on how to build or download face_blendshapes.tflite.

GeorgeS2019 added the type:others label (issues not falling in bug, performance, support, build and install or feature) Mar 10, 2023
@andrechen

andrechen commented Mar 10, 2023

@kuaashish
Collaborator

Hi @andrechen, community contributions are always welcome.
@GeorgeS2019, thank you for bringing this issue to our notice. We are forwarding the issue internally and will let you know if there is any specific update on this request. Thank you!

kuaashish added the legacy:face mesh, type:tflite, and type:docs-feature labels and removed the type:others label Mar 10, 2023
@kuaashish
Collaborator

Hi @schmidt-sebastian,
As reported here, documentation on how to build or download face_blendshapes.tflite has not been included in the guide. Could you please look into this issue and, if possible, add it to the documentation? Thank you!

@fadiaburaid

fadiaburaid commented Mar 14, 2023

Hi @kuaashish @schmidt-sebastian,
Could you also please provide a model card with details about the training data used? Was it trained on human data or synthetic data? From the results I am getting, it seems the model cannot generalize to different face geometries. Using synthetic data from photorealistic facial rigs could improve results, as the data can be augmented easily in terms of expressions, distance from the camera, camera angles, etc.
Thank you

@GeorgeS2019
Author

@fadiaburaid
If I understand correctly, the heavy lifting is already done by the original FaceMesh.
The blendshapes simply translate that heavy lifting into an industry format, e.g. the one used by ReadyPlayer.me.

@fadiaburaid

fadiaburaid commented Mar 14, 2023

@GeorgeS2019
Yes, you are correct. However, if the blendshapes regression model is not trained on enough different faces, it will not generalize because of differences in face geometry between people, which is apparently what is happening in the results I am getting.

@GeorgeS2019
Author

@fadiaburaid
The dev team could use your feedback.
I am independent.
Could you show them, e.g., a GIF and provide feedback?

@SpookyCorgi

SpookyCorgi commented Mar 14, 2023

Screen.Recording.2023-03-14.at.3.17.23.AM.mov

Really, really excited for when this becomes production level.
I quickly tested the model in my own browser tools and used it to pilot some avatars.
I basically took the refined landmarks from face_mesh, scaled them by the image size, then fed the landmarks into the tflite model using tfjs-tflite (see the sketch after this comment).

Working

  • The model seems really sensitive and accurate when I move slowly (tested by personally making faces at a webcam).

Not working

  • The output is really jumpy, which requires a lot of smoothing afterward. Having a confidence output for each blendshape might be helpful in this case.
  • The values are really off when the face turns even a little.

But again, I'm a web dev, not an ML dev, so I might not be using the model in an optimal way.
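For readers wanting to reproduce this, here is a minimal sketch of that wiring. It assumes the blendshapes model takes a [1, 146, 2] tensor of selected 2D landmarks already scaled to pixel coordinates and returns 52 blendshape scores; the exact input shape, landmark subset, and output ordering are assumptions that should be checked against the model files linked later in this thread. The tfjs-tflite calls (loadTFLiteModel, predict) are real API, but the helper names are mine.

```ts
import * as tf from "@tensorflow/tfjs";
import * as tflite from "@tensorflow/tfjs-tflite";

// Assumption: the model expects 146 selected 2D landmarks, shape [1, 146, 2],
// in pixel coordinates, and outputs 52 blendshape coefficients. Verify against
// face_blendshapes_in_landmarks.prototxt / face_blendshapes_out.prototxt.
const NUM_INPUT_LANDMARKS = 146;

async function loadBlendshapeModel(url: string): Promise<tflite.TFLiteModel> {
  // tfjs-tflite loads the .tflite file and runs it via WASM in the browser.
  return tflite.loadTFLiteModel(url);
}

function runBlendshapes(
  model: tflite.TFLiteModel,
  landmarks: Array<[number, number]> // already scaled by image width/height
): Float32Array {
  if (landmarks.length !== NUM_INPUT_LANDMARKS) {
    throw new Error(`expected ${NUM_INPUT_LANDMARKS} landmarks, got ${landmarks.length}`);
  }
  return tf.tidy(() => {
    const input = tf.tensor(landmarks, [1, NUM_INPUT_LANDMARKS, 2], "float32");
    const output = model.predict(input) as tf.Tensor;
    // Copy the 52 scores out before the tensors are disposed by tf.tidy.
    return output.dataSync() as Float32Array;
  });
}
```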

@GeorgeS2019
Author

@fadiaburaid

"scaled them by the image size"

This could be relevant!

@GeorgeS2019
Author

@andrechen, could you please share your experience? It would be really appreciated.

@GeorgeS2019
Author

@SpookyCorgi
The relative position of the webcam or Android phone (made possible with MediaPipe) needs to be fixed: right in front of the face, with no head turning away from the camera.

@fadiaburaid

fadiaburaid commented Mar 14, 2023

2023-03-14.16-23-49.mp4

I fed the normalized face landmarks from face_mesh to the model and used the output to control the face rig. These are the results from Unity using the TF Lite Unity plugin.
The head rotations are calculated from the landmarks directly, not from the blendshapes. You can see how the face is deformed from the original face at the beginning of the clip. I got the same results using a webcam.
I tried smoothing the landmarks before feeding them to the model to reduce the jitter, but the errors in the blendshapes tend to increase.
I also noticed the eyes don't really move, which could be due to the training data used.
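One possible way to address the jitter mentioned above and by @SpookyCorgi, sketched purely as a suggestion (not something documented for this model): smooth the 52 blendshape coefficients after inference with an exponential moving average instead of smoothing the input landmarks. A One Euro filter would likely trade lag for jitter more gracefully, but an EMA shows the idea.

```ts
// Hypothetical helper: exponential moving average over blendshape outputs.
// alpha near 1 tracks quickly (more jitter); alpha near 0 smooths heavily (more lag).
class BlendshapeSmoother {
  private state: Float32Array | null = null;

  constructor(private readonly alpha = 0.4) {}

  update(scores: Float32Array): Float32Array {
    if (this.state === null) {
      this.state = Float32Array.from(scores);
      return this.state;
    }
    for (let i = 0; i < scores.length; i++) {
      this.state[i] = this.alpha * scores[i] + (1 - this.alpha) * this.state[i];
    }
    return this.state;
  }
}

// Usage per frame (names from the earlier sketch are assumptions):
// const smoothed = smoother.update(runBlendshapes(model, scaledLandmarks));
```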

@GeorgeS2019
Author

GeorgeS2019 commented Mar 14, 2023

@fadiaburaid
Thanks for sharing!
You can see that the avatar face provided by @SpookyCorgi is not distorted/deformed.

@fadiaburaid

2023-03-14.17-31-11.mp4

Yes!!!
I am getting better results after scaling the landmarks by the image size.
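To make the scaling step concrete, here is a tiny illustration (my own, not code from this thread) of converting face_mesh's normalized landmarks to pixel coordinates before they go to the blendshapes model.

```ts
interface NormalizedLandmark {
  x: number; // in [0, 1], relative to image width
  y: number; // in [0, 1], relative to image height
  z?: number;
}

// Convert normalized landmarks to pixel coordinates, which reportedly
// gives better blendshape results than feeding normalized values directly.
function scaleLandmarksToImage(
  landmarks: NormalizedLandmark[],
  imageWidth: number,
  imageHeight: number
): Array<[number, number]> {
  return landmarks.map((lm) => [lm.x * imageWidth, lm.y * imageHeight]);
}
```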

@GeorgeS2019
Author

Image size is essential!!!!

@jays0606

They released the update in the official documentation:
https://developers.google.com/mediapipe/solutions/vision/face_landmarker
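Per that documentation, the current Tasks API can return blendshapes directly without manually wiring the .tflite file, roughly as in the sketch below. Option and field names follow the @mediapipe/tasks-vision package as documented there, and the model/WASM paths are placeholders to be replaced with the ones from the linked page.

```ts
import { FaceLandmarker, FilesetResolver } from "@mediapipe/tasks-vision";

// Rough sketch of the Face Landmarker task with blendshape output enabled.
async function createLandmarker(): Promise<FaceLandmarker> {
  const vision = await FilesetResolver.forVisionTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm"
  );
  return FaceLandmarker.createFromOptions(vision, {
    baseOptions: {
      modelAssetPath: "face_landmarker.task", // bundles mesh + blendshapes
    },
    outputFaceBlendshapes: true,
    runningMode: "IMAGE",
  });
}

async function logBlendshapes(image: HTMLImageElement): Promise<void> {
  const landmarker = await createLandmarker();
  const result = landmarker.detect(image);
  // Each blendshape is a category with a name (e.g. "jawOpen") and a score in [0, 1].
  for (const category of result.faceBlendshapes[0]?.categories ?? []) {
    console.log(category.categoryName, category.score.toFixed(3));
  }
}
```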

kuaashish removed the stat:awaiting googler (Waiting for Google Engineer's Response) label May 22, 2023
@jeremmoore

https://storage.googleapis.com/mediapipe-assets/face_blendshapes.tflite
https://storage.googleapis.com/mediapipe-assets/face_blendshapes_generated_graph.pbtxt
https://storage.googleapis.com/mediapipe-assets/face_blendshapes_in_landmarks.prototxt
https://storage.googleapis.com/mediapipe-assets/face_blendshapes_out.prototxt

Haven't tried them yet.

I have tested the tflite model using the data from the prototxt files and found that the inference values are slightly different from the expected values in face_blendshapes_out.prototxt. Meanwhile, the quality of the driven expressions is worse than with the MediaPipe Python solution, especially for the eyes. Does anyone have the same problem?
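A rough way to quantify "slightly different" is the maximum absolute per-coefficient deviation between your inference output and the expected values. The sketch below assumes both have already been parsed into plain arrays (parsing the prototxt itself is not shown).

```ts
// Report the largest per-coefficient deviation between an inference run
// and the expected values (e.g. parsed from face_blendshapes_out.prototxt).
function maxAbsDiff(expected: number[], actual: Float32Array): number {
  if (expected.length !== actual.length) {
    throw new Error(`length mismatch: ${expected.length} vs ${actual.length}`);
  }
  let worst = 0;
  for (let i = 0; i < expected.length; i++) {
    worst = Math.max(worst, Math.abs(expected[i] - actual[i]));
  }
  return worst;
}
```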
