Use <video> as display sink via canvas.captureStream #521
Comments
Merging in an audio track via a `MediaStreamAudioDestinationNode` seems to work in Firefox, Chrome/dev-Edge, and iOS Safari, but doesn't play the audio on desktop Safari. (So, amusingly, desktop Safari currently gets video but no audio, and iOS Safari gets audio but no video. Put two devices together and you're set. ;) ) It's probably another bug in WebKit; will make a simpler test case and report upstream.
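For reference, a minimal sketch of that merge, assuming the page's decoded audio is already routed through a Web Audio `AudioContext` (the element lookups and graph wiring here are illustrative, not the actual ogv.js internals):

```js
// Illustrative sketch: `canvas` and `video` stand in for the
// player's real elements; the audio graph wiring is app-specific.
const canvas = document.querySelector('canvas');
const video = document.querySelector('video');

const audioContext = new AudioContext();
const audioDest = audioContext.createMediaStreamDestination();
// ... connect the app's audio output node(s) to audioDest ...

// Combine the canvas video track and the audio destination's track
// into a single MediaStream and use it as the video sink.
const videoStream = canvas.captureStream();
const combined = new MediaStream([
  ...videoStream.getVideoTracks(),
  ...audioDest.stream.getAudioTracks(),
]);
video.srcObject = combined;
```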
Mac sound issue imported to Radar as rdar://problem/51290997
There's a provisional test mode for this; it has to be enabled explicitly. Test as "…"
To an appreciable degree.
This should provide a rough draft in code of the concept described in this issue: https://next.plnkr.co/edit/vQjbBo
I'm not sure this would apply; in this case the …
Is this issue trying to solve the case of getting and displaying current time, and seeking, during a live MediaStream? If that is the case, one workaround would be to delay the output by N seconds and pipe input media through … If that is not what this issue is attempting to resolve, kindly advise.
The concept of this code https://plnkr.co/edit/Axkb8s?p=info is to create an infinite live stream …
This mode ran into enough underlying platform bugs in the browsers that I'm taking it out. ;_; Was good to experiment with, though!
In addition to MSE-based experiments, I'm experimenting with a WebRTC-style MediaStream as a medium for getting video frames and audio samples into a real native `<video>` element. This uses the `<canvas>`'s `captureStream()` method to obtain a suitable MediaStream of frames from the canvas's backing buffer, piped into the video's `srcObject`.

This appears as a live stream to the `<video>`, so ogv.js is still responsible for A/V sync etc.; when drawing a frame we call `requestFrame()` on the stream (or track, depending on browser) to push the current frame onto the video stream, so it doesn't have to sample at a fixed rate.
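A minimal sketch of that flow; `paintFrame` is a hypothetical decode callback, not an actual ogv.js internal:

```js
const canvas = document.querySelector('canvas');
const video = document.querySelector('video');

// frameRate 0 means the canvas is never sampled automatically;
// frames only enter the stream when requestFrame() is called.
const stream = canvas.captureStream(0);
video.srcObject = stream;
video.play();

function paintFrame(frame) {
  // ... draw the decoded frame into the canvas here ...

  // Push the freshly drawn frame onto the stream. Chrome exposes
  // requestFrame() on the canvas capture track; Firefox puts it
  // on the stream itself, hence the feature check.
  const [track] = stream.getVideoTracks();
  if (typeof track.requestFrame === 'function') {
    track.requestFrame();
  } else if (typeof stream.requestFrame === 'function') {
    stream.requestFrame();
  }
}
```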
Unfortunately, while this mirrors cleanly and without visible delay, the browser's default controls reflect that it's a live stream, so they won't show the expected seek bar, duration/current time, etc. However, it seems possible to monkeypatch the HTMLVideoElement object to pass through to ogv.js's internals, which should allow custom controls to work even if the `<video>` is passed directly as the primary element (instead of sitting inside the `<ogvjs>`). Need to decide on the best way to do this.
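A rough sketch of what that pass-through could look like, assuming a hypothetical `player` object exposing the relevant playback state (the property and method names on `player` are illustrative, not the real ogv.js API):

```js
// Shadow the native live-stream accessors with own properties that
// report the player's notion of time and duration instead.
function patchVideoElement(video, player) {
  Object.defineProperties(video, {
    duration: {
      configurable: true,
      get: () => player.duration,
    },
    currentTime: {
      configurable: true,
      get: () => player.currentTime,
      // Seeking writes through to the player rather than the
      // (unseekable) live stream.
      set: (time) => player.seek(time), // hypothetical seek API
    },
  });
}
```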
For audio, it should be possible to combine with an audio track obtained from the AudioContext, creating a stream with both video and audio tracks. Audio is untested so far.
Note that all this works in: … and fails in: the current (EdgeHTML-based) Edge, which lacks `captureStream()` on `<canvas>`. Will be obsoleted sooner or later by the Chromium-based Edge, so not a big deal.