What is difference between @mediapipe/tasks-vision and @mediapipe/selfie_segmentation on npm? #4251
Comments
@hongruzhu Good question! Google Meet uses a proprietary model to implement real-time selfie segmentation on the input video feed, which unfortunately is not open-sourced. You can refer to this issue to learn more.
Hello @hongruzhu. You can continue to use those legacy solutions in your applications if you choose, though we would encourage you to check out the new MediaPipe solutions.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.
Closing as stale. Please reopen if you'd like to work on this further.
@hongruzhu I replaced the URL for deeplab_v3 with the selfie_segmenter model "https://storage.googleapis.com/mediapipe-models/image_segmenter/selfie_segmenter/float16/latest/selfie_segmenter.tflite?v=aljali.mediapipestudio_20230621_1811_RC00" and got very good results. As for the difference between @mediapipe/tasks-vision and @mediapipe/selfie_segmentation: after updating the model URL as described above, there is not much difference between the two. The only thing I observed is that the edges were sharper with the tasks-vision model than with the other one.
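A minimal sketch of the model-URL swap described above, assuming the option names from the @mediapipe/tasks-vision ImageSegmenter API used in the official demo (the `?v=...` query string is dropped here for readability; either URL form should fetch the same model). Only the options-building helper is runnable outside a browser; the actual segmenter setup is shown in comments because it needs the package's WASM assets:

```javascript
// Selfie segmenter model from the MediaPipe model storage (URL taken from the
// comment above).
const SELFIE_SEGMENTER_URL =
  "https://storage.googleapis.com/mediapipe-models/image_segmenter/" +
  "selfie_segmenter/float16/latest/selfie_segmenter.tflite";

// Build ImageSegmenter options pointing at the selfie model instead of
// deeplab_v3. runningMode is "IMAGE" or "VIDEO".
function selfieSegmenterOptions(runningMode) {
  return {
    baseOptions: { modelAssetPath: SELFIE_SEGMENTER_URL, delegate: "GPU" },
    runningMode,
    outputCategoryMask: true,
  };
}

// In the browser (not runnable here without the package and its WASM files):
// import { FilesetResolver, ImageSegmenter } from "@mediapipe/tasks-vision";
// const vision = await FilesetResolver.forVisionTasks(
//   "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm");
// const segmenter = await ImageSegmenter.createFromOptions(
//   vision, selfieSegmenterOptions("VIDEO"));
```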
I haven't checked this myself yet, but it's confirmed here: #3659 (comment)
I am working on a video conference project in JavaScript, and I want to run real-time selfie segmentation on the webcam feed using MediaPipe npm packages. However, I noticed that there are two different packages available on npm. The MediaPipe website uses the @mediapipe/tasks-vision npm package to demonstrate image segmentation and provides demo code (available at https://codepen.io/mediapipe-preview/pen/xxJNjbN), but that webcam demo seems a little laggy and not very smooth. On the other hand, I found another demo (https://github.com/ayushgdev/MediaPipeCodeSamples/blob/main/Vanilla%20JS/selfie%20segmentation%20with%20bg%20blur.html) that uses the @mediapipe/selfie_segmentation npm package to blur the background around the segmented person. This demo is much smoother than the former.
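For reference, the smoother demo linked above uses the legacy solution API, which looks roughly like the sketch below (names follow the published @mediapipe/selfie_segmentation package; the asset CDN path is an assumption based on the usual jsDelivr layout). Only the `locateFile` helper runs outside a browser; the rest is commented because it requires the package's WASM assets:

```javascript
// Tell the legacy solution where to fetch its WASM and model assets from.
function legacyLocateFile(file) {
  return `https://cdn.jsdelivr.net/npm/@mediapipe/selfie_segmentation/${file}`;
}

// In the browser (not runnable here without the package and its WASM files):
// import { SelfieSegmentation } from "@mediapipe/selfie_segmentation";
// const seg = new SelfieSegmentation({ locateFile: legacyLocateFile });
// seg.setOptions({ modelSelection: 1 }); // 0: general, 1: landscape model
// seg.onResults((results) => {
//   // Draw results.segmentationMask / results.image onto a canvas,
//   // e.g. blurring everything outside the mask.
// });
// await seg.send({ image: videoElement }); // call once per video frame
```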
My question is: what is the difference between @mediapipe/tasks-vision and @mediapipe/selfie_segmentation on npm? Is using @mediapipe/selfie_segmentation a better and smoother option compared to using @mediapipe/tasks-vision?
Thank you very much!