
Get raw data from MediaStream #327

Closed
piedshag opened this issue Mar 14, 2016 · 10 comments

@piedshag

Please excuse me if this is the wrong place to propose this. It would be really useful to have access to the raw data coming out of the MediaStream so it could be processed before being shipped off to a remote peer or displayed to the user. The data could perhaps be emitted in an ondata event which would emit raw data similar to that of the MediaRecorder. Would anyone else find something along these lines useful?

@alvestrand
Contributor

There's no such thing as "the raw data". There are many different possible internal representations, and we do not constrain which of them browsers use.
If you want the bitmap, paint it to a canvas via a video element.
If you want a compressed media format, use the MediaStreamRecorder.
I think we have the interfaces we need. What scenario isn't covered?
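For reference, the canvas route described above can be sketched roughly as follows. This is a minimal sketch, assuming `stream` is a live `MediaStream` (e.g. from `navigator.mediaDevices.getUserMedia`); the function name is illustrative:

```javascript
// Sketch: grab one raw RGBA frame from a MediaStream via <video> + <canvas>.
// `stream` is assumed to come from e.g. navigator.mediaDevices.getUserMedia().
function grabFrame(stream) {
  return new Promise((resolve) => {
    const video = document.createElement('video');
    video.srcObject = stream;
    video.muted = true;
    // Wait until the video is actually rendering frames before sampling it.
    video.addEventListener('playing', () => {
      const canvas = document.createElement('canvas');
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      const ctx = canvas.getContext('2d');
      ctx.drawImage(video, 0, 0);
      // ImageData.data is a Uint8ClampedArray of raw RGBA bytes.
      resolve(ctx.getImageData(0, 0, canvas.width, canvas.height));
    }, { once: true });
    video.play();
  });
}
```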

@piedshag
Author

I believe MediaRecorder only gives you the media once the stream has finished or you stop the recorder. I would like the media as it is produced, and as far as I am aware that is not possible with MediaRecorder.

@alvestrand
Contributor

See the "requestData" method in https://rawgit.com/w3c/mediacapture-record/master/MediaRecorder.html#methods - you can get the so-far-available data at any time.
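As a rough sketch of the two mechanisms the spec offers (the `timeslice` argument to `start()` and `requestData()`), assuming `stream` is an active `MediaStream`:

```javascript
// Sketch: pull data out of a live MediaRecorder without stopping it.
function recordInChunks(stream) {
  const recorder = new MediaRecorder(stream);
  recorder.ondataavailable = (e) => {
    // e.data is a Blob holding the media produced since the last event.
    console.log('chunk of', e.data.size, 'bytes');
  };
  // Option 1: request a dataavailable event roughly every 1000 ms...
  recorder.start(1000);
  // Option 2: ...or call recorder.requestData() at any moment to flush
  // whatever has been captured so far into a dataavailable event.
  return recorder;
}
```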

@piedshag
Author

Thank you! I should have done some proper research.

@alvestrand
Contributor

Seems that no action is needed. Closing.

@Pehrsons
Contributor

There's also https://w3c.github.io/mediacapture-worker/ being drafted.

@embirico

Hey @alvestrand, sorry to resuscitate an old thread, but I currently have a use case for getting the raw data, and a scenario that doesn't seem to be currently covered:

I have an application with fairly long-lived streams (e.g. 1 hour long at a time), and I'd like to provide users with a way to capture short clips of the most recent, say, 15 seconds.

Currently it seems that creating a buffer of the most recent data directly from the MediaStream is not supported.
Also, it seems that keeping a buffer of the most recent blobs from MediaRecorder doesn't work, due to the spec stating that

When multiple Blobs are returned (because of timeslice or requestData()), the individual Blobs need not be playable, but the combination of all the Blobs from a completed recording MUST be playable.

What do you think of supporting this scenario, or is there an interface I'm missing? Thank you!
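One workaround sometimes used for this "last N seconds" scenario (not suggested in this thread, and with real per-segment overhead and small gaps at segment boundaries) is to record in short, complete segments, so that each retained Blob is a finished recording and therefore playable on its own. A sketch, with illustrative durations:

```javascript
// Sketch: keep roughly the last 15 s by recording in 5 s segments and
// retaining only the three most recent ones. Each segment is a completed
// recording, so each Blob is independently playable (unlike timeslice chunks).
function keepRecentClips(stream, segmentMs = 5000, maxSegments = 3) {
  const segments = [];
  function startSegment() {
    const recorder = new MediaRecorder(stream);
    recorder.ondataavailable = (e) => {
      segments.push(e.data);
      if (segments.length > maxSegments) segments.shift(); // drop the oldest
    };
    setTimeout(() => {
      recorder.stop();  // completes this segment, firing dataavailable
      startSegment();   // immediately begin the next one
    }, segmentMs);
    recorder.start();
  }
  startSegment();
  return segments; // the most recent playable Blobs, oldest first
}
```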

@bc

bc commented May 25, 2020

@embirico did you end up finding a solution? I have a similar use case

@aboba
Contributor

aboba commented May 25, 2020

@bc @embirico As Harald has noted, access to raw video is already supported. Therefore the focus for new APIs is on scenarios requiring higher performance than is currently achievable via Canvas or MediaRecorder. We have the following WebRTC-NV use cases for access to raw video (raw audio access is already provided via WebAudio Worklets):
Funny Hats
Machine Learning

So far, there are two specifications under development which could conceivably address these use cases:
Insertable Streams
WebCodecs

In the current Origin Trial, Insertable Streams provides access to encoded video, but only a minor API change would be required to add an insertion point prior to frame encoding (sender) or after frame decoding (receiver). Since raw video is much larger than encoded video, it is not clear that the existing Insertable Streams API would provide sufficient performance for the Machine Learning use case in particular.

WebCodecs is still early in its incubation, but it may provide access to encoded bitstreams and raw (decoded) video.
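For illustration, the encoded-frame access in the Insertable Streams Origin Trial looked roughly like this. This is a sketch of the Chrome trial surface as it existed at the time (the `encodedInsertableStreams` flag and `createEncodedStreams()` method), not a finished standard:

```javascript
// Sketch: tap the encoded video frames on the sending side.
// Assumes `pc` was constructed as
//   new RTCPeerConnection({ encodedInsertableStreams: true })
// per the Chrome Origin Trial, and already has a video sender.
function tapEncodedFrames(pc) {
  const sender = pc.getSenders().find((s) => s.track && s.track.kind === 'video');
  const { readable, writable } = sender.createEncodedStreams();
  readable
    .pipeThrough(new TransformStream({
      transform(encodedFrame, controller) {
        // encodedFrame.data is an ArrayBuffer of the *encoded* bitstream;
        // a pre-encode (raw frame) insertion point is what the comment
        // above notes would require an API change.
        controller.enqueue(encodedFrame);
      },
    }))
    .pipeTo(writable);
}
```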

@BlobTheKat

If you want the bitmap, paint it to a canvas via a video element. If you want a compressed media format, use the MediaStreamRecorder. I think we have the interfaces we need. What scenario isn't covered?

Capturing microphone data to perform some processing and/or upload it over the network. For some applications, implementing WebRTC is more trouble than it's worth, and reusing an existing HTTP/WebSocket interface makes the most sense. The only way to obtain the raw data currently is with a ScriptProcessorNode on an AudioContext, which can capture input data; this definitely feels like the wrong way to do it. MediaRecorder is not sufficient, as the output formats supported by the different browsers are too restrictive to be useful in most cases.
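The ScriptProcessorNode route described above looks roughly like this (the node is deprecated in favour of AudioWorklet, but it is the widely available path the comment refers to; buffer size, channel count, and the callback name are illustrative):

```javascript
// Sketch: capture raw PCM samples from the microphone with a
// ScriptProcessorNode. `onSamples` receives a Float32Array per buffer.
function captureRawPcm(onSamples) {
  return navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
    const ctx = new AudioContext();
    const source = ctx.createMediaStreamSource(stream);
    // 4096-sample buffers, 1 input channel, 1 output channel.
    const processor = ctx.createScriptProcessor(4096, 1, 1);
    processor.onaudioprocess = (e) => {
      // Raw samples in [-1, 1] for channel 0; copy if you need to keep them,
      // since the buffer may be reused by the audio pipeline.
      onSamples(e.inputBuffer.getChannelData(0));
    };
    source.connect(processor);
    processor.connect(ctx.destination); // needed in some browsers to run the node
    // Return a cleanup function for the caller.
    return () => { processor.disconnect(); source.disconnect(); ctx.close(); };
  });
}
```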
