Extracting this discussion from #4 (comment), since this was not really fully discussed there.
The use cases for MediaStreamTrackGenerator for audio are unclear, given that its functionality largely overlaps with what WebAudio can already do, and WebAudio is already widely deployed in all major browsers.
MediaStreamTrackGenerator can be emulated by combining an AudioWorklet with a MediaStreamAudioDestinationNode.
We should document the use-cases where a native MediaStreamTrackGenerator API is superior to the above approach.
We should also document the use-cases where AudioWorklet is superior to MediaStreamTrackGenerator.
One benefit of AudioWorklet is that the application has full control in cases like: 'there is no more buffered data, what should I do?'. The application may decide to reduce the volume, synthesize some filler sound, etc.
With MediaStreamTrackGenerator, the application has no way to react to this case.
Its only option is to buffer enough data that the audio renderer never reaches that 'no more buffered data' state.
This is especially difficult when processing is done on the main thread, or when the application is expected to run on very diverse devices.
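To make the underflow point concrete, here is a minimal sketch of what an AudioWorkletProcessor-style class could do when its queue runs dry: fade the last rendered sample toward silence instead of emitting a hard glitch. All names here (`UnderflowAwareProcessor`, `enqueue`, the 0.9 fade factor) are hypothetical illustration, not part of any spec; the chunk-delivery path (normally the worklet's message port) is exposed as a plain method so the logic is visible outside a browser.

```javascript
// Hypothetical sketch: an AudioWorkletProcessor-like class that decides
// for itself what to render on buffer underflow. With a native
// MediaStreamTrackGenerator, this decision point does not exist.
class UnderflowAwareProcessor {
  constructor() {
    this.queue = [];      // chunks of Float32Array samples pushed by the app
    this.lastSample = 0;  // last rendered sample, used to fade on underflow
  }

  // In a real worklet the app would post decoded chunks via this.port;
  // a plain method keeps this sketch testable outside a browser.
  enqueue(chunk) {
    this.queue.push({ data: chunk, offset: 0 });
  }

  process(inputs, outputs) {
    const out = outputs[0][0]; // first output, first (mono) channel
    for (let i = 0; i < out.length; i++) {
      const head = this.queue[0];
      if (head) {
        this.lastSample = head.data[head.offset++];
        out[i] = this.lastSample;
        if (head.offset === head.data.length) this.queue.shift();
      } else {
        // Underflow: instead of an abrupt glitch, decay the last sample
        // toward silence (0.9 per sample is an arbitrary choice here).
        this.lastSample *= 0.9;
        out[i] = this.lastSample;
      }
    }
    return true; // keep the processor alive
  }
}

// Only meaningful inside a real AudioWorklet global scope:
if (typeof registerProcessor === 'function') {
  registerProcessor('underflow-aware', UnderflowAwareProcessor);
}
```

The same class, loaded as a worklet module and connected to a MediaStreamAudioDestinationNode, would give you a track with application-controlled underflow behavior, which is the capability this thread argues MediaStreamTrackGenerator lacks.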
I think we can treat this as a duplicate of issue #29, since the underlying issue is audio support in general, so I'm closing this.
Please reopen if you think audio support for generator/processor should be treated separately.