
Mediapipe facemesh tfjs #36

Closed
gauravgola96 opened this issue Sep 25, 2020 · 12 comments

@gauravgola96

Any plans for optimization for mobile browsers? Or will the tfjs model provided in the repo work as-is?

@PINTO0309
Owner

@gauravgola96
Author

gauravgola96 commented Sep 26, 2020

I tried it in a mobile browser on medium/low-end devices with a decent GPU, but got only around 5-6 FPS (WebGL backend). Can you tell me what optimizations you applied to this tfjs model? Also, tfjs facemesh was recently updated with iris support, which dropped its performance by another 5-7 FPS.

@PINTO0309
Owner

I don't know what kind of mobile device you're using, but I ran it in Google Chrome on my Pixel 4a and it performed at around 10 FPS. I think it's a GPU performance issue.
[GIF: screen recording of the demo running]

@gauravgola96
Author

I am testing on https://www.devicespecifications.com/en/model/8d2f4cea
Getting 5 FPS.

@PINTO0309
Owner

Hmmm. There doesn't seem to be any significant difference between your device's performance and mine. Have you tried the following demo?
https://terryky.github.io/tfjs_webgl_app/facemesh

@gauravgola96
Author

Yes, that is the one I tried. I am getting 5 FPS.

@PINTO0309
Owner

I generated and committed a TFJS model of Float16, hoping that the GPU would be used effectively.
https://github.com/PINTO0309/PINTO_model_zoo/tree/master/032_FaceMesh/08_tfjs

@terryky Does your FaceMesh example program use the Float32 model? Have you ever tried the Float16 model? I don't know if it will improve performance.

@gauravgola96
Author

From the network calls, it looks like this demo https://terryky.github.io/tfjs_webgl_app/facemesh is using https://storage.googleapis.com/tfhub-tfjs-modules/mediapipe/tfjs-model/facemesh/1/default/1/model.json
It is not using your quantized model.

@terryky

terryky commented Sep 28, 2020

Yes, the facemesh sample app simply uses the original MediaPipe tfjs model.
By default, it runs tfjs with the WebGL backend. Performance may increase with the wasm backend (it depends on the device).

@gauravgola96
Have you tried the wasm backend instead of the WebGL backend?
You can use the wasm backend just by enabling the following line:
https://github.com/terryky/tfjs_webgl_app/blob/fc404c39ba9a6f834f18a40f546564e94b8fbc69/facemesh/webgl_main.js#L5
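The backend switch terryky describes can be sketched as follows. This is only a sketch: `initBackend` is a hypothetical helper, not part of the sample app, and it assumes the `@tensorflow/tfjs` and `@tensorflow/tfjs-backend-wasm` packages are already loaded (e.g. via script tags).

```javascript
// Sketch: prefer the wasm backend, fall back to WebGL if it is unavailable.
// initBackend is a hypothetical helper name; tf is the loaded TF.js module.
async function initBackend(tf, preferred = "wasm") {
  // tf.setBackend() resolves to false when the backend cannot be initialized
  const ok = await tf.setBackend(preferred);
  if (!ok) {
    await tf.setBackend("webgl"); // fall back to the default backend
  }
  await tf.ready();
  return tf.getBackend();
}
```

Whether wasm actually beats WebGL depends on the device, as terryky notes; phones with a weak GPU but a fast CPU are the case where the wasm backend is most likely to help.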

@gauravgola96
Author

gauravgola96 commented Sep 28, 2020

@terryky @PINTO0309 Since you used the original MediaPipe tfjs model, I tested https://storage.googleapis.com/tfjs-models/demos/facemesh/index.html (the official demo) with iris prediction turned off.
WebGL backend: 5-6 FPS
wasm backend: 4-5 FPS

Can I somehow use the quantized (float16) model in your demo project?
Also, do I have to use the quantized BlazeFace model as well when using the quantized FaceMesh model?
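One way to experiment with the quantized model outside the facemesh wrapper (which may not accept a custom model URL in this version) is to load both graph models directly with `tf.loadGraphModel()`. A minimal sketch, with placeholder URLs and a hypothetical helper name; the assumption that a float32 detector can be mixed with a float16 mesh model is editorial, not confirmed in this thread:

```javascript
// Sketch: load the BlazeFace detector and the FaceMesh model directly,
// bypassing the facemesh wrapper. The two models run independently, so
// their precisions need not match (assumption): a float32 detector can
// feed face crops to a float16 mesh model.
// loadFaceMeshPipeline is a hypothetical helper; URLs are placeholders.
async function loadFaceMeshPipeline(tf, detectorUrl, meshUrl) {
  // tf.loadGraphModel() fetches model.json plus its weight shards
  const [detector, mesh] = await Promise.all([
    tf.loadGraphModel(detectorUrl),
    tf.loadGraphModel(meshUrl),
  ]);
  return { detector, mesh };
}
```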

@gauravgola96
Author

However, I tried to load your quantized model in facemesh and got this error.

[screenshot: error message when loading the quantized model]

@terryky

terryky commented Sep 28, 2020

I suspect that using the fp16 model will not improve performance: I have tried an fp16 model in the TensorFlow Lite environment and did not see a notable performance improvement.

The TensorFlow Lite port is here:
https://github.com/terryky/tflite_gles_app/tree/master/gl2facemesh
