Soundstage

Soundstage is a Graph Object Model for Web Audio processing graphs. It provides an API for creating, manipulating and observing graphs, and a JSONify-able structure for exporting and importing them.

Soundstage is the library that powers sound.io.

Dependencies and tests

Soundstage is in development. It currently depends on three repos, which are installed as git submodules.

Install with submodules:

git clone https://github.com/soundio/soundstage.git
cd soundstage
git submodule update --init

Tests use Karma. To run tests:

npm install
karma start

Soundstage(data, options) – overview

Soundstage data is an object with properties that define audio objects, the connections between them, a MIDI map and playable sequences. All properties are optional:

var data = {
    objects: [
        { id: 0, type: "input" },
        { id: 1, type: "flange", frequency: 0.33, feedback: 0.9, delay: 0.16 },
        { id: 2, type: "output" }
    ],

    connections: [
        { source: 0, destination: 1 },
        { source: 1, destination: 2 }
    ],

    midi: [
        { message: [176, 8], object: 1, property: "frequency" }
    ],

    presets: [],

    sequence: []
};

objects is an array of audio objects (an audio object is a wrapper for a Web Audio node graph). In Soundstage, audio objects must have an id and type. Other properties depend on the audio params that this type of audio object exposes.

connections is an array of connection objects defining connections between the audio objects.

midi is an array of routes for incoming MIDI messages.

presets is an array of presets used by audio objects.

sequence is a Music JSON sequence array of events. The sequence is played on soundstage.sequence.start().
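
For illustration, here is a sketch of what a short sequence might contain. The event shape [beat, type, ...data] is an assumption based on the Music JSON proposal, not taken from this repo:

var sequence = [
    // Hypothetical events: [beat, "note", number, velocity, duration]
    [0,   "note", 76, 0.8, 0.5],
    [0.5, "note", 77, 0.6, 0.5]
];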

Call Soundstage with this data to set it up as an audio graph:

var soundstage = Soundstage(data);

Turn your volume down a bit, enable the mic when prompted by the browser, and you will hear your voice being flanged.

The resulting object, soundstage, has the same structure as data, so the graph can be converted back to data with:

JSON.stringify(soundstage);

This means you can export an audio graph you have made at, say, sound.io (open the console and run JSON.stringify(soundstage)) and import it into your own web page by calling Soundstage(data) with that data.
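
As a minimal sketch, the round trip looks like this:

// Export the current graph as a JSON string...
var json = JSON.stringify(soundstage);

// ...and later, rebuild an equivalent graph from it
var copy = Soundstage(JSON.parse(json));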

Soundstage also accepts an options object. There is currently one option. Where your page has an existing audio context, pass it in to have Soundstage use it:

var soundstage = Soundstage(data, { audio: myAudioContext });

soundstage methods

.create()

soundstage.create(type, settings);

Creates an audio object of type, with settings giving values for its properties.

var delay = soundstage.create('delay', { time: 1 });
var output = soundstage.outputs[0];

soundstage.connect(delay, output);

Soundstage comes with audio object types:

"input"
"output"
"biquad-filter"
"compressor"
"convolver"
"delay"
"filter"
"flanger"
"loop"
"oscillator"
"pan"
"saturate"
"sampler"
"send"
"signal-detector"
"tone-synth"
"waveshaper"

You can also add your own audio objects with Soundstage.register(type, fn, settings).

.createInputs()

soundstage.createInputs();

Creates as many input audio objects as your input device will allow* (adding them to soundstage.objects along the way).

If information about your input device is not yet available (it becomes available when the promise Soundstage.requestMedia(audio) resolves), three input audio objects are created by default, from the input channels Stereo L-R, Mono L and Mono R. More are created once the device reports them.

console.log(soundstage.inputs)

[
    { type: "input", id: 1, channels: [0,1] },
    { type: "input", id: 2, channels: [0] },
    { type: "input", id: 3, channels: [1] }
]

*Currently, multi-channel input devices are not supported by browsers.

.createOutputs()

soundstage.createOutputs();

Creates as many output audio objects as your output device will allow (adding them to soundstage.objects along the way).

One output audio object is created by default, for destination Stereo 1-2.

console.log(soundstage.outputs)

[
    { type: "output", id: 4, channels: [0,1] }
]

.connect()

soundstage.connect(source, destination);

Connects the default output of source to the default input of destination, where source and destination are audio objects or ids of audio objects.

soundstage.connect(source, destination, outName, inName);

Connects the named output of source to the named input of destination.
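
For example, a hypothetical patch routing the "send" output of the object with id 1 into the default input of the object with id 2 (the "send" output name is borrowed from the connection example further down):

// Hypothetical: connect a named output to a named input by object id
soundstage.connect(1, 2, 'send', 'default');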

.disconnect()

soundstage.disconnect(source, destination);

Disconnects the default output of source from the default input of destination.

soundstage.disconnect(source, destination, outName, inName);

Disconnects the named output of source from the named input of destination.

.clear()

Removes and destroys all objects, connections, MIDI maps and sequences.

.destroy()

Removes and destroys all objects and connections, disconnects any media inputs from soundstage's input, and disconnects soundstage's output from audio destination.

.find()

soundstage.find(id)

Returns the audio object with id.

.query()

soundstage.query(selector)

Takes either a selector string or a query object and returns an array of matching audio objects.

soundstage.query('[type="tone-synth"]');
soundstage.query({ type: 'tone-synth' });

.stringify()

soundstage.stringify()

Returns the JSON string JSON.stringify(soundstage).

.update()

soundstage.update(data);

Creates new objects, or updates existing objects, from data.

soundstage.update({
    objects: [
        { type: "flanger", id: 5 },
        { type: "looper", id: 6 }
    ],
    connections: [
        { source: 5, destination: 6 }
    ]
});

Soundstage(data) uses soundstage.update(data) internally when initially creating a soundstage.

soundstage properties

.tempo

var tempo = soundstage.tempo;

Gets and sets the tempo. A shortcut for controlling soundstage.clock.rate, where

soundstage.tempo = 60;

sets the clock rate to 1 beat per second.
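
Since tempo is in beats per minute and clock.rate is in beats per second, the two are assumed to differ by a factor of 60:

// These two assignments are assumed equivalent
soundstage.tempo = 120;
soundstage.clock.rate = 2;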

.objects

A collection of audio objects. An audio object controls one or more audio nodes. In Soundstage, audio objects have an id and a type; name is optional. Other properties depend on the type.

var flanger = soundstage.objects.find(7);

{
    id: 7,
    type: "flange",
    frequency: 256
}

Changes to flanger.frequency are reflected immediately in the Web Audio graph.

flanger.frequency = 480;

// flanger.automate(name, value, time, curve)
flanger.automate('frequency', 2400, audio.currentTime + 0.8, 'exponential');

For more about audio objects see github.com/soundio/audio-object.

soundstage.objects.create(type, settings)

Creates an audio object. type is a string; the properties of settings depend on the type.

Returns the created audio object. Created objects can also be found in soundstage.objects, as well as in soundstage.inputs and soundstage.outputs if they are of type "input" or "output" respectively.

soundstage.objects.delete(object || id)

Destroys an audio object in the graph. Both the object and any connections to or from it are destroyed.

soundstage.objects.find(id || query)
soundstage.objects.query(query)

soundstage.objects is published in JSON.stringify(soundstage).

soundstage.inputs

A subset collection of soundstage.objects, containing only type 'input' audio objects.

soundstage.inputs is NOT published in JSON.stringify(soundstage).

soundstage.outputs

A subset collection of soundstage.objects, containing only type 'output' audio objects.

soundstage.outputs is NOT published in JSON.stringify(soundstage).

soundstage.connections

A collection of connections between the audio objects in the graph. A connection has a source and a destination that point to ids of objects in soundstage.objects:

{
    source: 7,
    destination: 12
}

In addition, a connection can define a named output node on the source object and/or a named input node on the destination object:

{
    source: 7,
    output: "send",
    destination: 12,
    input: "default"
}


soundstage.connections.create(data)

Connects two objects. data must have source and destination defined. Naming an output or input is optional; both default to "default".

soundstage.connections.create({
    source: 7,
    output: "send",
    destination: 12
});


soundstage.connections.delete(query)

Removes all connections whose properties are equal to the properties defined in the query object. For example, disconnect all connections to object with id 3:

soundstage.connections.delete({ destination: 3 });


soundstage.connections.query(query)

Returns an array of all objects in connections whose properties are equal to the properties defined in the query object. For example, get all connections from object with id 6:

soundstage.connections.query({ source: 6 });

soundstage.clock

An instance of Clock, which requires the repo github.com/soundio/clock. If Clock is not found, soundstage.clock is undefined.

soundstage.clock maps a beat clock against the audio context's time clock, and publishes properties and methods for scheduling function calls. It is also an AudioObject with two output nodes, "rate" and "duration", for syncing Web Audio parameters to tempo.
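
As a hypothetical example of what those outputs enable, using Soundstage.getOutput (documented below) and an assumed existing DelayNode delayNode:

// Sketch: make a delay time track the duration of one beat by
// connecting the clock's "duration" output node to the AudioParam
var durationNode = Soundstage.getOutput(soundstage.clock, 'duration');
durationNode.connect(delayNode.delayTime);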

soundstage.clock is not published by JSON.stringify(soundstage).

.time

The current time. Gets audio.currentTime. Read-only.

.beat

The current beat. Gets clock.beatAtTime(audio.currentTime). Read-only.

.rate

The current rate, in beats per second.

.timeAtBeat(beat)

Returns the audio context time at beat.

.beatAtTime(time)

Returns the beat at time.
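
The two functions are inverses of one another. Assuming a constant rate of 2 beats per second, for example:

var clock = soundstage.clock;

// Beat 8 falls 4 seconds after beat 0 at 2 beats per second
var time = clock.timeAtBeat(8);

// ...and mapping that time back gives the beat we started with
clock.beatAtTime(time);    // 8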

.automate(name, value, time)

// Move to 120bpm in 2.5 seconds
clock.automate('rate', 2, clock.time + 2.5);

Inherited from AudioObject.

.tempo(beat, tempo)

Creates a tempo change at a time given by beat. If beat is not defined, the clock creates a tempo change at the current beat.
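
For example, a hypothetical ritardando scheduled ahead of time (assuming tempo is given in beats per minute, as with soundstage.tempo):

// Slow to 90bpm at beat 16, then to 60bpm at beat 32
clock.tempo(16, 90);
clock.tempo(32, 60);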

.find(beat)

Returns tempo change found at beat or undefined.

.remove(beat)

Removes tempo change found at beat.

.on(beat, fn)

Shorthand for clock.cue(beat, fn, 0); calls fn at the beat specified, with zero lookahead.

.cue(beat, fn)

Cue a function to be called just before beat. fn is called with the argument time, which can be used to accurately schedule Web Audio changes.

clock.cue(42, function(time) {
    gainParam.setValueAtTime(0.25, time);
    bufferSourceNode.start(time);
});

Pass in a third parameter lookahead to override the default (0.05s) lookahead:

clock.cue(44, function(time) {
    gainParam.setValueAtTime(1, time);
    bufferSourceNode.stop(time);
}, 0.08);

.uncue(beat, fn)

Removes fn at beat from the timer queue. Either, neither or both beat and fn can be given.

Remove all cues from the timer queue:

clock.uncue();

Remove cues at beat from the timer queue:

clock.uncue(beat);

Remove cues to fire fn from the timer queue:

clock.uncue(fn);

Remove cues at beat to fire fn from the timer queue:

clock.uncue(beat, fn)

.uncueAfter(beat, fn)

Removes fn after beat from the timer queue. fn is optional.

Remove all cues after beat from the timer queue:

clock.uncueAfter(beat);

Remove all cues after beat to fire fn from the timer queue:

clock.uncueAfter(beat, fn)

.onTime(time, fn)

Shorthand for clock.cueTime(time, fn, 0); calls fn at the time specified, with zero lookahead.

.cueTime(time, fn)

Cue a function to be called just before time. fn is called with the argument time, which can be used to accurately schedule changes to Web Audio parameters:

clock.cueTime(42, function(time) {
    gainParam.setValueAtTime(0.25, time);
    bufferSourceNode.start(time);
});

Pass in a third parameter lookahead to override the default (0.05s) lookahead:

clock.cueTime(44, fn, 0.08);

.uncueTime(time, fn)

Removes fn at time from the timer cues. Either, neither or both time and fn can be given.

Remove all cues from the timer queue:

clock.uncueTime();

Remove cues at time from the timer queue:

clock.uncueTime(time);

Remove cues to fire fn from the timer queue:

clock.uncueTime(fn);

Remove cues at time to fire fn from the timer queue:

clock.uncueTime(time, fn)

.uncueAfterTime(time, fn)

Removes fn after time from the timer queue. fn is optional.

Remove all cues after time from the timer queue:

clock.uncueAfterTime(time);

Remove all cues after time for fn from the timer queue:

clock.uncueAfterTime(time, fn)

soundstage.midi

A collection of MIDI routes that make object properties controllable via incoming MIDI events. A MIDI route looks like this:

{
    message:   [191, 0],
    object:    AudioObject,
    property:  "gain",
    transform: "linear",
    min:       0,
    max:       1
}


soundstage.midi.create(data)

Create a MIDI route from data:

soundstage.midi.create({
    message:   [191, 0],
    object:    1,
    property:  "gain",
    transform: "cubic",
    min:       0,
    max:       2
});

The properties transform, min and max are optional. They default to different values depending on the type of the object.
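
The exact curves are not documented here, but a transform can be assumed to map the incoming MIDI data byte (0-127) onto the [min, max] range, roughly along these lines:

// Assumed shapes only; the real curves live in the Soundstage source
function linear(min, max, data2) {
    return min + (data2 / 127) * (max - min);
}

function cubic(min, max, data2) {
    var n = data2 / 127;
    return min + n * n * n * (max - min);
}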

soundstage.midi.delete(query)

Removes all MIDI routes whose properties are equal to the properties defined in the query object. For example, disconnect all routes to gain properties:

soundstage.midi.delete({ property: "gain" });


soundstage.midi.query(query)

Returns an array of all objects in soundstage.midi whose properties are equal to the properties defined in the query object. For example, get all routes for the object with id 6:

soundstage.midi.query({ object: 6 });

Soundstage

Soundstage.register(type, function)

Register an audio object constructor function for creating audio objects of type.

Soundstage.register('my-audio-object', MyAudioObjectConstructor);

MyAudioObjectConstructor receives the parameters:

function MyAudioObjectConstructor(audio, settings, clock, presets) {
    // defaults is your object of default property values
    var options = Object.assign({}, defaults, settings);
    // Set up the audio object
}

settings is an object that comes directly from set-up data passed to soundstage.objects.create(type, settings) or Soundstage(data). You should make sure the registered audio object correctly initialises itself from settings, and serialises back to an equivalent settings object via JSON.stringify.
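
As a minimal sketch of a registerable constructor, assuming the AudioObject(audio, input, output, params) signature from github.com/soundio/audio-object, a single-GainNode wrapper might look like this:

var defaults = { gain: 1 };

function MyGain(audio, settings, clock, presets) {
    var options = Object.assign({}, defaults, settings);
    var node = audio.createGain();
    node.gain.value = options.gain;

    // Assumed AudioObject call: publishes a `gain` property bound
    // to the underlying AudioParam so it serialises back to settings
    AudioObject.call(this, audio, node, node, { gain: node.gain });
}

Soundstage.register('my-gain', MyGain);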

Soundstage comes with several audio object constructors already registered:

// Single node audio objects
'biquad-filter'
'compressor'
'convolver'
'delay'
'oscillator'
'waveshaper'

// Multi node audio objects
'compress'
'flange'
'loop'
'filter'
'saturate'
'send'

Overwrite them at your peril.

.getEventDuration()

Soundstage.getEventDuration(event);

Returns the duration of a sequence event.

.getEventsDuration()

Soundstage.getEventsDuration(events);

Returns the duration of a collection of events, a sequence.

.getInput()

Soundstage.getInput(object);

Returns the default input AudioNode of an AudioObject.

Soundstage.getInput(object, 'rate');

Returns the named input AudioNode of the AudioObject object.

If object is not an AudioObject, returns undefined.

.getOutput()

Soundstage.getOutput(object);

Returns the default output AudioNode of an AudioObject.

Soundstage.getOutput(object, 'rate');

Returns the named output AudioNode of the AudioObject object.

If object is not an AudioObject, returns undefined.

.isAudioContext()

Soundstage.isAudioContext(object);

Returns true where object is an AudioContext.

.isAudioNode()

Soundstage.isAudioNode(object);

Returns true where object is an AudioNode.

.isAudioParam()

Soundstage.isAudioParam(object);

Returns true where object is an AudioParam.

.isAudioObject()

Soundstage.isAudioObject(object);

Returns true where object is an AudioObject.

.isDefined()

Soundstage.isDefined(object)

Returns true where object is not undefined or null.

.isEvent()

Soundstage.isEvent(object)

Returns true where object is a sequence event. Uses duck typing, as events have no prototype to check against.

.requestMedia()

Soundstage.requestMedia(audio).then(function(mediaNode) {
    mediaNode.connect(...);
});

Given the audio context audio, requestMedia returns a promise that resolves to a MediaStreamSourceNode. That node carries the stream from the device's physical audio inputs.

Only one MediaStreamSourceNode is created per audio context.

.features

Soundstage.features

An object of results from feature tests.

Soundstage.features.disconnectParameters

true if the Web Audio API implementation supports disconnecting a specific node via node1.disconnect(node2), otherwise false.
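
A sketch of how such a feature flag might be used when modifying a graph (the fallback branch is an assumption, not Soundstage's actual strategy):

if (Soundstage.features.disconnectParameters) {
    // Disconnect only the route to this destination
    sourceNode.disconnect(destinationNode);
}
else {
    // Assumed fallback: disconnect everything, then rebuild the
    // connections that should survive
    sourceNode.disconnect();
}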

Author

Stephen Band @stephband