-
Notifications
You must be signed in to change notification settings - Fork 19
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
using audify with the web audio api via electron #17
Comments
In case anyone comes across this just wanted to post the solution I settled on, the best way to get Web Audio raw output buffers in realtime seems to be via an If anyone's more curious I can post some sample code. Or if anyone has any feedback/ideas for better implementation I'm open to hearing. |
@marcelblum - if possible, could you please provide a sample code? |
Sure. There are a few gotchas bc of Electron, Chromium, and Web Audio API quirks but I have it working pretty well now. Was thinking of publishing a package for this if there's demand since there's not much info on the web about how to do this. On Windows I have it working well with Electron 12+. On Mac, you'll need to use Electron 14+ for it to work well because of Chromium Mac-specific issues with In the main process, you'll need to instantiate your In the renderer process, the best way I found to do this is to include your Also, note that in So a basic example has renderer code that looks something like this (assumes you have already created an var RtNode;
audioContext.audioWorklet.addModule(URL.createObjectURL(new Blob([`
const RtASIOInstance = new (require(${JSON.stringify(require.resolve('audify'))}).RtAudio)(7);
const defaultSamplesPerFrame = 128;
const samplesPerFrame2x = defaultSamplesPerFrame * 2;
RtASIOInstance.openStream(
{ deviceId: ${yourChosenDeviceID}, nChannels: 2, firstChannel: ${yourChosen1stChannel} },
undefined,
16, //float32 is the native format web audio delivers buffers to process()
${audioContext.sampleRate},
defaultSamplesPerFrame,
"MyStream",
undefined,
() => {
console.log("played 1st buffer successfully");
RtASIOInstance.setFrameOutputCallback();
},
1, //non-interleaved
(error, message) => console.warn(error, message)
);
RtASIOInstance.start();
class RtRouter extends AudioWorkletProcessor {
constructor () {
//might want to do stuff here like initialize messaging between the worker and renderer process
//super();
//this.port.onmessage = ...
}
process (inputs) {
if (inputs[0]?.length > 1) { //inputs will be empty if no audio node is connected or something goes wrong in the renderer
const bufferConcatenation = new Float32Array(samplesPerFrame2x);
bufferConcatenation.set(inputs[0][0]); //left channel data
bufferConcatenation.set(inputs[0][1], defaultSamplesPerFrame); //right channel data
RtASIOInstance.write(bufferConcatenation);
}
return true;
}
}
registerProcessor("deviceRouter", RtRouter);
`], { type: "application/javascript; charset=utf-8" }))).then(() => {
RtNode = new AudioWorkletNode(audioContext, "deviceRouter");
RtNode.onprocessorerror = (e) => console.warn("error from RtNode", e);
//now you can do anyWebAudioNode.connect(RtNode);
}).catch((reason) => console.warn("Rt device routing failed", reason)); Obviously this is a simple example, in real world use you'll likely want to handle messaging with the worklet, run |
HI, @marcelblum Thank you very much for the code snippet. Much Appreciated. I will reach out to you again if I am stuck at any place. |
Audify includes AFAICT the only currently maintained and fully working Node implementation of RtAudio, giving it quite a bit of utility for routing audio - awesome! I am looking for a way to pipe audio rendered in realtime by the Web Audio api into an instance of Audify's RtAudio in an Electron app. My goal is to be able to output audio generated by the browser (e.g. Electron's renderer process) to an ASIO-only device on Windows. Does anyone have any idea on approaches for this? I was thinking perhaps through a WebRTC
MediaStream
(AudioContext.createMediaStreamDestination()
) but I am not sure if this is the optimal route. Any ideas/pointers/nudges in the right direction would be appreciated!The text was updated successfully, but these errors were encountered: