Reuse video texture for mask mixing #5291
Comments
Hi @danrossi, could you kindly provide the detailed steps you are following from our documentation, or alternatively share the standalone code with us? This will help us reproduce the issue and gain a better understanding of it. Thank you!
I'll put together a vanilla code sample. It's actually the WebGL post-render on the same context, hence binding the mask texture. It would be good to somehow apply the video texture the same way, to avoid re-uploading a video frame. It's already efficient enough, but I'm trying to improve it further.
Hi @danrossi, we apologize for any confusion. Could you please confirm whether you are using our new Task API as outlined in the documentation, or whether you are still using our legacy Selfie Segmentation from here? Please note that support for legacy solutions has been discontinued and they are no longer maintained. Our current focus is on enhancing the new Task APIs, so new features will only be added to those. Thank you!
I was tied up with major VR upgrades to fix issues with my Safari hacks and Apple Vision, sorry. I have put up a working test file; requesting a media device isn't possible in JSFiddle. You can see the bit where I have to re-upload the video frame even from within the same context. Hopefully rebinding MediaPipe's internal video texture on the offscreen canvas context is enough to render the video for mixing? I'm assuming it's scaled down, so it can be rescaled? Rendering in WebGL is performing well enough; I've done performance tests. That one last bit is all I'm hoping for, like applying the mask texture from MediaPipe.
https://electroteque.org/dev/mediapipe/
An example of it working, although I bundle it in, so I haven't made an update for a while; I'll double-check: https://electroteque.org/plugins/videojs/rtcstreaming/demos/virtual-background/
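For context, the re-upload the comment describes looks roughly like the sketch below. This is an illustrative assumption, not MediaPipe code: `gl`, `texture`, and `video` are hypothetical names for the shared WebGL context, a texture handle, and the `<video>` element, and the helper shows the per-frame `texImage2D` copy the feature request wants to avoid.

```javascript
// Hypothetical sketch of the per-frame video re-upload described in the
// issue. MediaPipe has already uploaded the same frame internally on the
// shared offscreen context, so this copy is redundant work.
function uploadVideoFrame(gl, texture, video) {
  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, texture);
  // Copies the full frame from the <video> element to the GPU every tick.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
  return texture;
}
```

If MediaPipe exposed (or allowed rebinding of) its internal video texture, this upload could be dropped and the texture bound directly, the same way the segmentation mask texture is consumed from the result.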
MediaPipe Solution (you are using)
Selfie Segmentation
Programming language
JavaScript
Are you willing to contribute it
None
Describe the feature and the current behaviour/state
Ability to apply a video texture from the same WebGL context instead of re-uploading a video texture, similar to how the mask texture is applied.
Will this change the current API? How?
No response
Who will benefit with this feature?
No response
Please specify the use cases for this feature
WebGL mixing of video, mask and background image
Any Other info
I currently use this in my render to mix elements with a custom shader. However, it might be more efficient to re-apply the video texture already added in MediaPipe than to re-upload a video frame. I already apply the mask texture from the segmentation result, and an image texture is already added to the program and doesn't need to be re-uploaded. It's the video that is being uploaded a second time, after already being sent to MediaPipe.
Ideally there would be a premixed output using the calculators already available to do such things, which wouldn't require post-processing in WebGL.
So: the ability to bind the already-added video texture in MediaPipe's GL context, the same way it's done with the mask.
The processing is done on the same offscreen canvas context that MediaPipe is using.
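The mixing step described above can be sketched as a fragment shader of the following shape. This is a minimal illustration, not the author's actual shader; the uniform names (`u_video`, `u_mask`, `u_background`) are assumptions, and the mask is assumed to hold the person confidence in its red channel.

```javascript
// Minimal GLSL fragment shader illustrating the described compositing:
// person pixels come from the camera frame, everything else from the
// background image, weighted by the segmentation mask.
const mixFragmentShader = `
precision mediump float;
varying vec2 v_texCoord;
uniform sampler2D u_video;      // camera frame (currently re-uploaded each tick)
uniform sampler2D u_mask;       // segmentation mask from MediaPipe's result
uniform sampler2D u_background; // static image, uploaded once

void main() {
  float person = texture2D(u_mask, v_texCoord).r;
  vec4 fg = texture2D(u_video, v_texCoord);
  vec4 bg = texture2D(u_background, v_texCoord);
  gl_FragColor = mix(bg, fg, person);
}
`;
```

Only `u_video` changes every frame here, which is why being able to bind MediaPipe's already-uploaded video texture (as is done with the mask) would remove the remaining redundant upload.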