using audify with the web audio api via electron #17

Closed

marcelblum opened this issue Jul 29, 2021 · 4 comments

marcelblum commented Jul 29, 2021

Audify includes, AFAICT, the only currently maintained and fully working Node implementation of RtAudio, which gives it quite a bit of utility for routing audio - awesome! I am looking for a way to pipe audio rendered in realtime by the Web Audio API into an instance of Audify's RtAudio in an Electron app. My goal is to output audio generated by the browser (i.e. Electron's renderer process) to an ASIO-only device on Windows. Does anyone have any ideas on approaches for this? I was thinking perhaps to go through a WebRTC MediaStream (AudioContext.createMediaStreamDestination()), but I am not sure that is the optimal route. Any ideas/pointers/nudges in the right direction would be appreciated!

@marcelblum

In case anyone comes across this, I just wanted to post the solution I settled on: the best way to get Web Audio raw output buffers in realtime seems to be via an audioWorklet, which is actually a pretty simple method with no need to mess with WebRTC streams at all. I have the worklet itself instantiate RtAudio from Audify and pipe the raw input PCM data directly to rtAudio.write() (in Electron, worklets can call Node.js modules as long as nodeIntegrationInWorker is true). I connect a Web Audio node to it back in the main thread (in my case I connect my main master output node to the worklet), and the audio is routed to the RtAudio stream nicely - tested up to 32-bit 96kHz output, no glitching. The only issue I came across is that some ASIO devices I tested did not seem to work with RtAudio. I may open a separate issue about that to investigate further if I can't figure it out.

If anyone's more curious I can post some sample code. Or if anyone has any feedback/ideas for a better implementation, I'm open to hearing them.

@Durgaprasad-Budhwani

@marcelblum - if possible, could you please provide some sample code?

marcelblum commented Oct 7, 2021

Sure. There are a few gotchas because of Electron, Chromium, and Web Audio API quirks, but I have it working pretty well now. I was thinking of publishing a package for this if there's demand, since there's not much info on the web about how to do this.

On Windows I have it working well with Electron 12+. On Mac, you'll need Electron 14+ for it to work well, because of Mac-specific Chromium issues with audioWorklet in earlier versions. Note also the warning in the Electron docs about using native addons in workers, but I have not run into any of the problems mentioned there while using Audify.

In the main process, you'll need to instantiate your BrowserWindow with a webPreferences options object that includes { nodeIntegrationInWorker: true, nodeIntegration: true, contextIsolation: false }. It probably helps to add backgroundThrottling: false too.
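
For reference, here's a minimal main-process sketch (file and variable names are just placeholders, not from my app):

// main process: enable Node integration in the renderer and in worker threads,
// so the audioWorklet will be able to require() audify
const { app, BrowserWindow } = require("electron");

app.whenReady().then(() => {
  const win = new BrowserWindow({
    webPreferences: {
      nodeIntegration: true,         //lets the renderer use require()
      nodeIntegrationInWorker: true, //lets the worklet use require()
      contextIsolation: false,       //exposes require directly rather than behind a preload bridge
      backgroundThrottling: false    //keeps the page driving the audio graph from being throttled
    }
  });
  win.loadFile("index.html"); //placeholder entry page
});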

In the renderer process, the best way I found to do this is to include your audioWorklet code as a string that is turned into a Blob and then an object URL. This is because require inside workers in Electron has some quirks: it does not have the relative path resolution capabilities of regular require due to isolation/security, so you have to feed it the absolute path to the Audify module, which of course is dynamic depending on whether the app is packaged and where your app is installed on the user's system. To get around this you must get the path to Audify at runtime in the renderer using require.resolve().
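
To show just that trick in isolation (a tiny sketch; the variable names are mine):

// renderer: resolve audify's absolute path at runtime and bake it into the worklet source string
const audifyPath = require.resolve("audify"); //works whether or not the app is packaged
const workletSource = `const { RtAudio } = require(${JSON.stringify(audifyPath)});`;
//JSON.stringify quotes the path and escapes backslashes, which matters for Windows paths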

Also, note that in audioWorklet the web audio buffer size is fixed at 128, at least for now (see the notes here). So it's easiest to manage if you set your RtAudio stream to have a 128 frameSize, though it is possible to manage smaller or larger sizes if needed (sometimes it's necessary with Windows ASIO, see #18); it just increases the complexity of the worker code a bit - a sketch of one approach follows.
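
For example, if a device insists on a larger frameSize, one way to handle it (an untested sketch, not from my app; assumes a non-interleaved stereo stream opened with a 256 frameSize) is to have the processor accumulate two 128-sample render quanta before each write():

//inside the worklet source string, replacing the simpler process() shown in the full example below
const rtFrameSize = 256;                               //hypothetical frameSize required by the device
const accumulated = new Float32Array(rtFrameSize * 2); //non-interleaved: [left 256 samples][right 256 samples]
let filled = 0;

class AccumulatingRouter extends AudioWorkletProcessor {
  process (inputs) {
    const input = inputs[0];
    if (input?.length > 1) {
      accumulated.set(input[0], filled);               //left channel
      accumulated.set(input[1], rtFrameSize + filled); //right channel
      filled += input[0].length;                       //128 per render quantum
      if (filled >= rtFrameSize) {
        rtInstance.write(accumulated);                 //rtInstance = the RtAudio stream opened earlier in the same worklet
        filled = 0;
      }
    }
    return true;
  }
}
registerProcessor("accumulatingRouter", AccumulatingRouter);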

So a basic example has renderer code that looks something like this (it assumes you have already created an audioContext, defined which deviceId and firstChannel you want to use, and a fixed buffer size of 128; it also assumes ASIO):

var RtNode;
audioContext.audioWorklet.addModule(URL.createObjectURL(new Blob([`
  const RtASIOInstance = new (require(${JSON.stringify(require.resolve('audify'))}).RtAudio)(7); //7 = RtAudioApi.WINDOWS_ASIO
  const defaultSamplesPerFrame = 128;
  const samplesPerFrame2x = defaultSamplesPerFrame * 2;
  RtASIOInstance.openStream(
    { deviceId: ${yourChosenDeviceID}, nChannels: 2, firstChannel: ${yourChosen1stChannel} },
    undefined,
    16, //float32 is the native format web audio delivers buffers to process()
    ${audioContext.sampleRate},
    defaultSamplesPerFrame,
    "MyStream",
    undefined,
    () => {
      console.log("played 1st buffer successfully");
      RtASIOInstance.setFrameOutputCallback();
    },
    1, //non-interleaved
    (error, message) => console.warn(error, message)
  );
  RtASIOInstance.start();
  class RtRouter extends AudioWorkletProcessor {
    constructor () {
      super();
      //might want to do stuff here like initialize messaging between the worker and renderer process
      //this.port.onmessage = ...
    }
    process (inputs) {
      if (inputs[0]?.length > 1) { //inputs will be empty if no audio node is connected or something goes wrong in the renderer
        const bufferConcatenation = new Float32Array(samplesPerFrame2x);
        bufferConcatenation.set(inputs[0][0]); //left channel data
        bufferConcatenation.set(inputs[0][1], defaultSamplesPerFrame); //right channel data
        RtASIOInstance.write(bufferConcatenation);
      }
      return true;
    }
  }
  registerProcessor("deviceRouter", RtRouter);
`], { type: "application/javascript; charset=utf-8" }))).then(() => {
  RtNode = new AudioWorkletNode(audioContext, "deviceRouter");  
  RtNode.onprocessorerror = (e) => console.warn("error from RtNode", e);
  //now you can do anyWebAudioNode.connect(RtNode);
}).catch((reason) => console.warn("Rt device routing failed", reason));

Obviously this is a simple example; in real-world use you'll likely want to handle messaging with the worklet, run getDevices() and return the result, etc. The above code is untested - I just adapted it from more complex code specific to my own app, so apologies in advance if it has some stupid typo or error :)
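
For instance, a rough sketch of the messaging part (the "listDevices" message name is just something I made up for illustration) - the worklet listens on its port and replies with RtAudio's getDevices() result, and the renderer asks for it through the node's port:

//inside the worklet source string, extending the constructor from the example above
class RtRouter extends AudioWorkletProcessor {
  constructor () {
    super();
    this.port.onmessage = (e) => {
      if (e.data === "listDevices") {
        this.port.postMessage(RtASIOInstance.getDevices()); //array of device info objects
      }
    };
  }
  //process() unchanged from the example above
}

//back in the renderer, once RtNode exists:
RtNode.port.onmessage = (e) => console.log("devices reported by the worklet", e.data);
RtNode.port.postMessage("listDevices");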

@Durgaprasad-Budhwani

Hi @marcelblum, thank you very much for the code snippet. Much appreciated. I will reach out to you again if I get stuck anywhere.
