
(OfflineAudioContext): Offline Audio Context #222

Closed
olivierthereaux opened this issue Sep 11, 2013 · 3 comments

Comments

@olivierthereaux
Contributor

Originally reported on W3C Bugzilla ISSUE-17389 Tue, 05 Jun 2012 12:10:06 GMT
Reported by Michael[tm] Smith

Audio-ISSUE-11 (OfflineAudioContext): Offline Audio Context [Audio Processing API]

http://www.w3.org/2011/audio/track/issues/11

Raised by: Alistair MacDonald
On product: Audio Processing API

Per Nyblom suggests a silent audio graph that can render audio faster than real time, which is useful for mixing down a graph to a wave file.

http://lists.w3.org/Archives/Public/public-audio/2012AprJun/0024.html

This is in line with UC 3: online music production tool
http://www.w3.org/2011/audio/wiki/Use_Cases_and_Requirements#UC_3:_online_music_production_tool

For example: a user has an eight-track DAW in the web browser and wants to export the mix to a single file for download.
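
A minimal sketch of this mixdown use case, using the OfflineAudioContext interface that was eventually added to the spec (see the closing comment below). The `trackBuffers` array, the stereo channel count, and the render length are assumptions for illustration:

```js
// Mix several decoded tracks down to a single buffer, faster than real time.
// Assumes `trackBuffers` is an array of already-decoded AudioBuffers.
function mixDown(trackBuffers, sampleRate, lengthInFrames) {
  var ctx = new OfflineAudioContext(2, lengthInFrames, sampleRate);
  trackBuffers.forEach(function (buffer) {
    var source = ctx.createBufferSource();
    source.buffer = buffer;
    source.connect(ctx.destination); // "silent" destination: nothing is played
    source.start(0);                 // all tracks start at t = 0
  });
  // startRendering() resolves with the complete mix as one AudioBuffer.
  return ctx.startRendering();
}

// Usage: mixDown(tracks, 44100, 44100 * 180).then(encodeAsWavAndDownload);
// (encodeAsWavAndDownload is a hypothetical application-level step.)
```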

James Wei points to some example code on this; the feature is already being worked on:
http://lists.w3.org/Archives/Public/public-audio/2012AprJun/0028.html

Chris R. points out that this is an "offline audio context" and that the code being worked on is subject to change:
http://lists.w3.org/Archives/Public/public-audio/2012AprJun/0029.html

@olivierthereaux
Contributor Author

Original comment by Srikumar Subramanian (Kumar) on W3C Bugzilla. Wed, 17 Oct 2012 11:47:25 GMT

The current WebKit implementation of an "offline audio context" works by overloading the AudioContext constructor to accept three arguments: numberOfChannels, numberOfFrames, and sampleRate. The AudioContext interface also has an event handler field called 'oncomplete' to signal completion. Should this be formalized in the spec?
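
For concreteness, a sketch of how that prototype shape would be used, per the description above. The argument values and the way rendering is started are assumptions, and per Chris R.'s note the prototype code is subject to change:

```js
// Constructor overload as described: numberOfChannels, numberOfFrames, sampleRate.
var ctx = new AudioContext(2, 44100 * 30, 44100); // 30 s of stereo at 44.1 kHz

ctx.oncomplete = function (event) {
  // Assumption: the completion event carries the rendered result.
  var rendered = event.renderedBuffer;
  // ... hand the buffer off for encoding, download, etc.
};

// Build the graph against ctx as usual, then kick off rendering
// (startRendering() is what the prototype appears to use).
ctx.startRendering();
```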

One consequence of offline contexts concerns the "4.1.3. Lifetime" section of the draft spec, which currently reads "Once created, an AudioContext will not be garbage collected. It will live until the document goes away." Offline contexts will need to be GC-ed.

An argument against offline contexts is that mix-downs can be implemented in pure JS anyway; but then the rendering pipeline for such a mix-down would have to reimplement all the built-in nodes in JS, which is wasteful.

From an API perspective, it would be better to indicate that a context is offline in a more explicit manner than the constructor-overloading approach of the current WebKit prototype.

If offline contexts are included, one likely additional requirement would be the ability to schedule JS callbacks when the context reaches certain times. This will be needed to set up dynamic parts of the render graph just in time, so that the memory requirement of the render graph does not grow in proportion to the duration of the offline context. One way to work around the absence of such an explicit callback mechanism is to introduce a dummy JS node whose purpose is to trigger such callbacks. For this work-around to actually work, native nodes have to stall for JS nodes, which should not happen in the realtime case if I understand correctly. Such behaviour differences would then have to be documented in the spec.
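
As it happens, the Web Audio API spec later gained OfflineAudioContext.suspend()/resume() for exactly this kind of just-in-time graph construction (well after this issue was filed). A minimal sketch, with the chunking interval and the getNextChunkBuffer() helper as assumptions:

```js
var ctx = new OfflineAudioContext(2, 44100 * 600, 44100); // 10 min of stereo

// Hypothetical helper: build only the nodes needed for the next stretch of
// audio, so the graph never holds the whole render's worth of nodes at once.
function scheduleChunk(startTime) {
  var source = ctx.createBufferSource();
  source.buffer = getNextChunkBuffer(); // assumed application-level function
  source.connect(ctx.destination);
  source.start(startTime);
}

// Suspend every 10 seconds of *rendered* time (not wall-clock time) to
// extend the graph, then resume rendering. All suspend points must be
// registered before rendering reaches them.
for (var t = 10; t < 600; t += 10) {
  ctx.suspend(t).then(function () {
    scheduleChunk(ctx.currentTime);
    ctx.resume();
  });
}

scheduleChunk(0);
ctx.startRendering().then(function (renderedBuffer) {
  // full render available here
});
```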

@olivierthereaux
Contributor Author

Original comment by Srikumar Subramanian (Kumar) on W3C Bugzilla. Wed, 17 Oct 2012 12:28:29 GMT

I just realized that, from a resource perspective, it might be much better for an offline audio context to provide periodic JS callbacks with buffers of a fixed duration rather than deliver the whole render via a single oncomplete callback - sort of like a JS audio destination node. This would let us stream the rendered audio to a file using the local file system API instead of holding it all in memory, or send it to a WebRTC encode+transmit pipe.

(I confess I haven't used the current prototype in webkit and therefore may have some misunderstandings.)
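
Nothing like this exists in the prototype or the spec; purely as an illustration of the streaming idea, a hypothetical API shape (the onrenderchunk handler name and the file-sink helpers are invented):

```js
// HYPOTHETICAL: a streaming offline context that delivers fixed-size chunks
// as they are rendered, instead of one monolithic buffer at the end.
var ctx = new OfflineAudioContext(2, 44100 * 600, 44100);

ctx.onrenderchunk = function (event) {       // invented handler name
  appendChunkToFile(event.renderedBuffer);   // invented file-system sink
};

ctx.oncomplete = function () {
  finalizeFile();                            // invented
};

ctx.startRendering();
```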

@olivierthereaux
Contributor Author

Original comment by Olivier Thereaux on W3C Bugzilla. Tue, 02 Apr 2013 15:03:07 GMT

Closing this as the OfflineAudioContext has been added to the spec and individual issues about it are being tracked independently.
