
1.2. The WAAPI AudioWorklet Signal Engine


Sema's signal engine is a core module responsible for everything audio-related, including control, rendering and playback.

The signal engine implements the singleton pattern for modularisation and is therefore self-contained. It communicates with the rest of the application through the publish-subscribe messaging system (see an example of how messaging is implemented for the signal engine) and an extensible message protocol.
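
As a rough illustration of this decoupling, the sketch below shows how the engine could subscribe to evaluation requests and publish status updates. The PubSub object, topic names, and method names here are hypothetical placeholders for illustration, not Sema's actual messaging API.

// Hypothetical pub-sub wiring; the PubSub API and topic names are illustrative only.
class SignalEngine {
  static instance = null;

  // Singleton accessor: every module shares the same engine instance.
  static getInstance(pubSub) {
    if (!SignalEngine.instance) SignalEngine.instance = new SignalEngine(pubSub);
    return SignalEngine.instance;
  }

  constructor(pubSub) {
    this.pubSub = pubSub;
    // React to interpreted DSP code arriving from the language worker.
    this.pubSub.subscribe('eval-dsp', (code) => this.evalDSP(code));
  }

  evalDSP(code) {
    // ...forward the code to the AudioWorklet processor (see below)...
    this.pubSub.publish('engine-status', { evaluated: true });
  }
}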

This module builds a simple WAAPI graph with a Web Audio API AudioWorklet node for high-performance processing, a destination node for sound playback, and on-request analyser nodes for visualisation. It also sets up the I/O channels and media streams that will be processed in the audio thread.
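
A minimal sketch of this graph construction using the standard Web Audio API; the processor name and channel configuration are assumptions for illustration.

// Build the WAAPI graph (sketch; node name and channel counts are assumptions).
const audioContext = new AudioContext();
await audioContext.audioWorklet.addModule('maxi-processor.js');

const engineNode = new AudioWorkletNode(audioContext, 'maxi-processor', {
  numberOfInputs: 1,
  numberOfOutputs: 1,
  outputChannelCount: [2], // stereo
});

// Analyser nodes are created on request for the visualisation widgets.
const analyser = audioContext.createAnalyser();
engineNode.connect(analyser);

// Route the engine's output to the speakers.
engineNode.connect(audioContext.destination);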

[Figure: signal engine]

The AudioWorklet Node

Sema's signal engine implements the AudioWorklet pattern, in which the AudioWorklet node loads a custom AudioWorklet processor, maxi-processor.js. The former runs on the main JavaScript thread, while the latter runs on a high-priority, audio-dedicated thread, the AudioWorkletProcessor (AWP) scope. The two components communicate bidirectionally through the AudioWorklet's asynchronous messaging system (this.port.onmessage and this.port.postMessage()).
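
A minimal skeleton of the processor side of this pattern, using the standard AudioWorklet API; the message contents are illustrative assumptions.

// maxi-processor.js skeleton (sketch; message contents are illustrative).
class MaxiProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    // Receive code, samples, and buffers posted from the main thread.
    this.port.onmessage = (e) => { /* handle e.data */ };
  }

  process(inputs, outputs, parameters) {
    // Render one 128-sample block here, then optionally post
    // data back for visualisation on the main thread.
    this.port.postMessage({ frame: outputs[0][0] });
    return true; // keep the processor alive
  }
}

registerProcessor('maxi-processor', MaxiProcessor);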

On the main JavaScript thread, we use the AudioWorklet node's port to (see the sketch after this list):

  • inject the interpreted DSP code to be dynamically evaluated in the AWP scope
  • inject audio samples that are loaded asynchronously on the main thread
  • inject a SharedArrayBuffer for shared-memory access to the signal, for use in machine learning workers
  • receive signals for real-time visualisation in a dedicated dashboard widget
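
The sketch below illustrates these four uses from the main thread; the message field names, the sample path, and the dashboardWidget object are assumptions, not Sema's exact protocol.

// Main-thread side of the port (field names are illustrative assumptions).

// 1. Inject interpreted DSP code for dynamic evaluation in the AWP scope.
engineNode.port.postMessage({ eval: true, setup: setupString, loop: loopString });

// 2. Inject an audio sample loaded asynchronously on the main thread.
const response = await fetch('samples/kick.wav'); // hypothetical sample path
const decoded = await audioContext.decodeAudioData(await response.arrayBuffer());
engineNode.port.postMessage({ sample: decoded.getChannelData(0) });

// 3. Inject a SharedArrayBuffer shared with machine learning workers.
const sab = new SharedArrayBuffer(4096);
engineNode.port.postMessage({ sab });

// 4. Receive signals for real-time visualisation.
engineNode.port.onmessage = (e) => dashboardWidget.draw(e.data); // hypothetical widget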

The AudioWorklet Processor

maxi-processor.js is where we load custom WebAssembly DSP modules, such as Maximilian and Open303, into the AWP scope so that they can be used in the evaluation of custom DSP expressions.

import Maximilian from './maximilian.wasmmodule.js'; // Maximilian DSP library compiled to WebAssembly
import RingBuffer from "./ringbuf.js"; // lock-free ring buffer (thanks padenot)
import Open303 from './open303.wasmmodule.js'; // Open303 synth engine compiled to WebAssembly
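
Once imported, these modules expose constructors that evaluated DSP code can instantiate directly, as in this quick sketch (maxiOsc and its sinewave method also appear in the generated code further below):

// Instantiating an imported WASM DSP object inside the processor (sketch).
const osc = new Maximilian.maxiOsc();
const oneSample = osc.sinewave(440); // one sample of a 440 Hz sine wave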

Most importantly, this is where we dynamically evaluate these DSP expressions and hot-swap them for rendering audio. The process is described more formally in our paper, but here we illustrate it with an example. When a user evaluates the code snippet below in a Live Code Editor widget for Sema's default language

:lfo:{{1}sin, 1,50}blin;
:osc:{50,0.2}pul;
>{:osc:, 500, :lfo:, 0, 1, 0,0}svf;

the signal engine receives the stringified code below (via the publish-subscribe messaging system, as a result of a user-triggered evaluation in the user interface, followed by parsing and interpretation in a worker thread). The first string is a setup function that builds the DSP objects; the second is a loop function that computes one audio sample per call.

// String with the Setup function, which is dynamically evaluated (i.e. eval()) only ONCE in the processor; the stray semicolons are artifacts of code generation
() => {
  let q = this.newq();
  q.b0u470 = new Maximilian.maxiOsc();
  q.b0u470.phaseReset(0);;;
  q.b0u471 = new Maximilian.maxiOsc();
  q.b0u471.phaseReset(0);;
  q.b0u473 = new Maximilian.maxiSVF();
  q.b0u473_p1 = new Maximilian.maxiTrigger();
  q.b0u473_p2 = new Maximilian.maxiTrigger();;;;
  return q;
}

// Loop function, which is evaluated once for each audio sample, in the processor's process() callback
(q, inputs, mem) => {
  (mem[0] = Maximilian.maxiMap.linlin(q.b0u470.sinewave(1), -1, 1, 1,
    50));
  (mem[1] = q.b0u471.pulse(50, 0.2));
  this.dacOutAll((() => {
    q.b0u473_cutoff = 500;
    if (q.b0u473_p1.onChanged(q.b0u473_cutoff, 1e-5)) {
      q.b0u473.setCutoff(q.b0u473_cutoff)
    };
    q.b0u473_res = (mem[0] != undefined ? mem[0] : 0);
    if (q.b0u473_p2.onChanged(q.b0u473_res, 1e-5)) {
      q.b0u473.setResonance(q.b0u473_res)
    };
    return q.b0u473.play((mem[1] != undefined ? mem[1] : 0), 0,
      1, 0, 0)
  })());
}