Description
System information
- TensorFlow.js version (you are using): 3.11.0
- Are you willing to contribute it (Yes/No): Yes
Describe the feature and the current behavior/state.
The latest MediaPipe FaceMesh model supports asymmetric facial expressions and performs remarkably well at tracking the mouth area. Here's a quick demo as of Jan 10, 2022: https://codepen.io/mediapipe/full/KKgVaPJ (turn on the "refine landmarks" option). This is a huge improvement over the current pretrained model on tfjs, which can't track facial expressions nearly as vividly: https://storage.googleapis.com/tfjs-models/demos/face-landmarks-detection/index.html?model=mediapipe_facemesh. I notice that the tfjs face-landmarks-detection package on npm is more than a year old. Since it uses MediaPipe FaceMesh underneath (constructor API attached below), it would be great if we could update to the latest model:
const model = await faceLandmarksDetection.load(
  faceLandmarksDetection.SupportedPackages.mediapipeFacemesh);
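For context, the CodePen demo above appears to use MediaPipe's own JavaScript solution (@mediapipe/face_mesh), where the attention model is enabled through a refineLandmarks option. A minimal sketch of that usage, with the CDN path and the video element as placeholders:
// Sketch based on MediaPipe's JS solution API; the CDN path and input element are illustrative.
import { FaceMesh } from '@mediapipe/face_mesh';

const faceMesh = new FaceMesh({
  locateFile: (file) => `https://cdn.jsdelivr.net/npm/@mediapipe/face_mesh/${file}`,
});
faceMesh.setOptions({
  maxNumFaces: 1,
  refineLandmarks: true, // enables the attention model around the lips, eyes, and irises
  minDetectionConfidence: 0.5,
  minTrackingConfidence: 0.5,
});
faceMesh.onResults((results) => {
  // With refineLandmarks enabled, each face yields 478 landmarks instead of 468.
  console.log(results.multiFaceLandmarks);
});
await faceMesh.send({ image: document.querySelector('video') });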
Will this change the current api? How?
No. It should just be a swap of the underlying model for the latest one; the existing API would stay the same (see the sketch below).
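For example, existing calling code along these lines (loosely following the package README; the video element is a placeholder) should continue to work unchanged once the underlying model is refreshed:
const model = await faceLandmarksDetection.load(
  faceLandmarksDetection.SupportedPackages.mediapipeFacemesh);
// Returns an array of predictions, one per detected face.
const predictions = await model.estimateFaces({
  input: document.querySelector('video'),
});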
Who will benefit with this feature?
Everyone who uses the facemesh module with tfjs. As the linked demos show, tracking quality is visually far better with the latest MediaPipe FaceMesh release, yet the inference time is almost identical.