In the following example, `canvas.getContext('2d').drawImage(imgData, ...)` is far less efficient than `canvas.getContext('2d').drawImage(videoElement, ...)`.
If both the video and the canvas are GPU accelerated, `drawImage(videoElement, ...)` can copy one GPU texture directly to another. But `drawImage(imgData, ...)` requires reading the GPU texture back into CPU memory and then uploading that CPU memory to a GPU texture again.
Currently, the MediaStream Image Capture spec forces GPU-backed video frames to be read back into system memory. This hurts performance very badly.
The root of the problem is that FrameGrabEvent inherently returns a software RGBA memory block. It should instead return a handle that refers to the video frame.
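A minimal sketch of the two canvas paths contrasted above. `videoElement` and `imgData` are assumptions standing in for a playing `<video>` and the software RGBA block the spec currently delivers; note that an `ImageData` actually has to go through `putImageData()`, since `drawImage()` does not accept `ImageData` directly.

```javascript
// Illustrative sketch only; assumes 'videoElement' is a playing <video>
// and 'imgData' is the ImageData produced by the current capture spec.
const canvas = document.createElement('canvas');
const ctx = canvas.getContext('2d');

// Fast path: if both video and canvas are GPU backed, this can be a
// texture-to-texture copy that never leaves the GPU.
ctx.drawImage(videoElement, 0, 0);

// Slow path: imgData lives in CPU memory, meaning the frame was already
// read back from the GPU, and putImageData() must upload it again.
ctx.putImageData(imgData, 0, 0);
```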
In the same sense, it is very inefficient to upload the captured frame to WebGL or WebCL.
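The same asymmetry shows up in WebGL, where `texImage2D()` accepts both an `HTMLVideoElement` and an `ImageData`. A hedged sketch, assuming `gl` is a `WebGLRenderingContext` and `videoElement`/`imgData` stand in for the two capture results:

```javascript
// Illustrative sketch only; 'gl', 'videoElement', and 'imgData' are
// assumed to exist (WebGL context, playing <video>, captured ImageData).
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);

// Direct path: texImage2D accepts an HTMLVideoElement, so a
// GPU-accelerated implementation can avoid a CPU round trip.
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE,
              videoElement);

// Readback path: an ImageData is a software RGBA block, so the frame
// was read back to CPU memory and must now be uploaded all over again.
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE,
              imgData);
```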