
DSP graph should run in a different thread #16

Open · sebpiq opened this issue Aug 16, 2013 · 10 comments
@sebpiq commented Aug 16, 2013

One idea is for the user-facing nodes to be mere proxies that send commands to the actual nodes running in the DSP thread.
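
To make the idea concrete, here is a minimal sketch of what such a proxy could look like over child_process messaging. Everything here is made up for illustration (GainNodeProxy, dsp-process.js, the command format); it is just one possible shape of the approach:

```js
// Sketch of the proxy idea: the "node" the user manipulates does no DSP
// itself; it only forwards commands to the real node in the DSP process.
var fork = require('child_process').fork

// Hypothetical file that would host the real DSP graph.
var dspProcess = fork('./dsp-process.js')

function GainNodeProxy(id) {
  this.id = id
}

GainNodeProxy.prototype.setGain = function(value) {
  // No processing happens here; we just ship the command across.
  dspProcess.send({ nodeId: this.id, command: 'setParam', param: 'gain', value: value })
}

var gain = new GainNodeProxy(1)
gain.setGain(0.5)
```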

@sebpiq commented Sep 10, 2013

This will be hard, as Node doesn't really support threading or forking. So objects sent between two processes are all serialized and deserialized, which is not acceptable in our case.

@sebpiq commented Sep 19, 2013

Try and benchmark solutions for avoiding buffer copies between processes (a baseline sketch follows this list):

https://github.com/3rd-Eden/node-memcached
https://github.com/supipd/node-shm
https://github.com/kazupon/node-mappedbuffer
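
For reference, a crude, self-contained way to get the baseline number that any of the modules above would have to beat: time a round trip of ten seconds of audio through the plain, serializing IPC channel. This is only a sketch of the measurement, not a rigorous benchmark:

```js
// Baseline: cost of shipping a large buffer through the regular
// (JSON-serializing) child_process IPC channel.
var fork = require('child_process').fork

if (process.env.IS_CHILD) {
  // Child side: acknowledge each message so the parent can time a round trip.
  process.on('message', function() { process.send('ok') })
} else {
  var child = fork(__filename, [], { env: { IS_CHILD: '1' } })
  var samples = new Float32Array(44100 * 10) // ten seconds of mono audio

  child.on('message', function() {
    console.log('round trip:', Date.now() - start, 'ms')
    child.kill()
  })

  var start = Date.now()
  // Typed arrays don't survive the JSON-based IPC channel, so convert to a
  // plain array first -- this copy + serialization is exactly the cost the
  // shared-memory modules above try to avoid.
  child.send(Array.prototype.slice.call(samples))
}
```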

@jussi-kalliokoski commented

This is one of the major shortcomings of the Web Audio API as it currently stands. You need something like shared memory and/or locks to actually implement the Web Audio API. You can, however, implement most of the important parts without shared memory or running in the main thread. One should hardly ever be transferring big buffers between the main and audio threads. The problematic parts are when you are getting stuff back from the audio thread, since you can't make that synchronous without using some native addon.

I hope we can fix the situation sooner rather than later on the spec side.

@sebpiq commented Sep 20, 2013

Hmm ... I don't really understand why AudioBuffer#getChannelData() causes a problem. Could you explain?
Actually, thinking about it, I don't see any operation between the audio thread and the main thread that needs to be synchronous. Is there something I forgot?

My concern here is more the copying of big buffers between processes, which happens for example when using an AudioBufferSourceNode, or when decoding audio.
And in fact, that's not even a blocking issue, as it can be implemented simply with copies ... which is really, really inefficient, but works (probably).

@srikumarks commented

> So objects sent between two processes are all serialized and deserialized, which is not acceptable in our case.

In the context of node.js, the inter-process memory bandwidth is unlikely to be a bottleneck (see https://gist.github.com/srikumarks/6180450). It is, in other words, acceptable to have all the audio rendering happen in another process. This is, btw, the architecture of SuperCollider.

@sebpiq commented Sep 20, 2013

Yes ... but in SuperCollider, buffers are allocated server-side. The client-side Buffer instance is merely a proxy for a selected set of operations; for example, you don't have direct access to the buffer data. You can set/get values, but (being a SuperCollider user myself) I have never had to do that. Buffer API in the SC docs: http://doc.sccode.org/Classes/Buffer.html

Here the size of data I have in mind is not 4096 frames but more like 13,230,000 frames (the number of frames in a 5-minute sound file at a 44100 Hz sample rate), so it's another order of magnitude! That's why I think copying between processes is not a good idea.

I think I will try putting all the buffers in shared memory, as suggested here: https://wiki.mozilla.org/User:Roc/AudioBufferProposal#Implementation_Sketch. The small problem being that there is no established solution for shared memory in node.js ...
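
As a sketch of what that shared-memory layout looks like in practice, here is the approach expressed with the worker_threads and SharedArrayBuffer APIs that later versions of Node provide; neither existed at the time of this discussion, so take it purely as an illustration of the idea, not a solution that was available then:

```js
// Both threads view the same sample data through a SharedArrayBuffer:
// no copying, no serialization (requires Node 10.5+ for worker_threads).
const { Worker, isMainThread, parentPort, workerData } = require('worker_threads')

if (isMainThread) {
  // One shared region big enough for a 5-minute, 44.1 kHz mono file.
  const shared = new SharedArrayBuffer(13230000 * Float32Array.BYTES_PER_ELEMENT)
  const samples = new Float32Array(shared)
  samples[0] = 0.5 // the main thread writes...

  const worker = new Worker(__filename, { workerData: shared })
  worker.on('message', (msg) => {
    console.log('dsp thread saw sample 0 =', msg) // ...the dsp thread reads it, no copy
    worker.terminate()
  })
} else {
  const samples = new Float32Array(workerData)
  parentPort.postMessage(samples[0])
}
```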

@sebpiq commented Dec 15, 2014

To me this is the single most important issue to settle before development on the library can progress. The architecture should be right, and I think it won't be right if audio can't run in a separate thread.

@padenot commented Feb 18, 2015

The right solution here is to make it possible to run the DSP code in another thread in JS. That's how Chrome, Safari and Firefox do it, and it's the only way to get decent latency. Maybe salvaging https://github.com/audreyt/node-webworker-threads to make it work with an audio thread would be a good start? Or maybe the goal here is to have no native code at all?

Once you have threading working in your node process, you can then implement efficient buffer transfers (the way roc outlines it in the document you linked).
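
For illustration, the transfer pattern in question looks like this, again sketched with the worker_threads API from later Node versions: listing an ArrayBuffer in the transfer list moves ownership to the other thread instead of copying it:

```js
// "Transfer, don't copy": the buffer changes owner and is detached
// (zero length) on the sending side afterwards.
const { Worker, isMainThread, parentPort } = require('worker_threads')

if (isMainThread) {
  const worker = new Worker(__filename)
  const samples = new Float32Array(44100)
  // Naming samples.buffer in the transfer list moves it to the worker.
  worker.postMessage(samples, [samples.buffer])
  console.log('after transfer, main thread sees length', samples.length) // 0
} else {
  parentPort.on('message', (samples) => {
    console.log('dsp thread received', samples.length, 'samples')
    process.exit(0)
  })
}
```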

Also, getChannelData is not an issue anymore (it was an issue back when roc and Jussi were discussing this, but we have since fixed it, at both the implementation and the spec level).

@sebpiq commented Feb 18, 2015

I remember considering this solution but discarding it ... I just can't remember exactly why! I think there is no support for typed arrays in workers, but I'm not sure about that: audreyt/node-webworker-threads#18

Also, it seems like Node.js clusters have come a long way since I last looked at them (Sept 2013!), and now you can fork ... which means that would probably be the way to go.
As you say @padenot, first take the audio processing out of the main thread, then optimize the buffer transfers.

Also, I don't mind having native code there, as long as it can be replaced by something else in the browser.
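
A minimal sketch of that cluster route, assuming nothing about the eventual graph code: fork the process and keep a placeholder render loop out of the main event loop:

```js
// The master keeps the public API; the forked worker would own the DSP graph.
var cluster = require('cluster')

if (cluster.isMaster) {
  var dsp = cluster.fork()
  dsp.on('message', function(msg) {
    console.log('audio process says:', msg)
  })
  dsp.send({ command: 'start' })
} else {
  process.on('message', function(msg) {
    if (msg.command === 'start') {
      // Placeholder for the render loop that would pull blocks
      // through the DSP graph here.
      process.send('dsp running in pid ' + process.pid)
    }
  })
}
```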
