purescript-audio-behaviors

UPDATE. This repo is archived and is no longer being maintained. I've since created purescript-wags, which is faster and more ergonomic. Please use that!

purescript-behaviors for web audio.

Demo

Check out klank.dev, where the klank-studio directory has examples of this being used in the browser.

Installation

spago install

Build

spago build

Main idea

This library uses the behaviors pattern pioneered by Conal Elliott and Paul Hudak. You describe the way audio should behave at a given time, and the function is sampled at regular intervals to build the audio graph.

For example, consider the following behavior, taken from HelloWorld.purs:

scene :: Number -> Behavior (AudioUnit D1)
scene time = let
      rad = pi * time
    in
      pure $ speaker
         ( (gain' 0.1 $ sinOsc (440.0 + (10.0 * sin (2.3 * rad))))
              :| (gain' 0.25 $ sinOsc (235.0 + (10.0 * sin (1.7 * rad))))
              : (gain' 0.2 $ sinOsc (337.0 + (10.0 * sin rad)))
              : (gain' 0.1 $ sinOsc (530.0 + (19.0 * (5.0 * sin rad))))
              : Nil
          )

Here, there are four sine wave oscillators whose frequencies modulate subtly over time, creating an eerie Theremin-like effect. Under the hood, this library samples the function to determine what each frequency should be at any given moment and makes sure the result is rendered to the speaker.

Building a scene

The main unit of work in purescript-audio-behaviors is the scene. A scene, like the one above, is a function of time, where the input time comes from the audio clock at regular intervals.

In this section, we'll build a scene from the ground up. In doing so, we'll accomplish several things:

  1. Getting a static sound to play.
  2. Adding sound via the microphone.
  3. Adding playback from an audio tag.
  4. Going from mono to stereo.
  5. Getting the sound to change as a function of time.
  6. Getting the sound to change as a function of a mouse input event.
  7. Making sure that certain sounds occur at a precise time.
  8. Remembering when events happened.
  9. Working with feedback.
  10. Adding visuals.

Getting a static sound to play

Let's start with a sine wave at A440 playing at a volume of 0.5 (where 1.0 is the loudest volume).

scene :: AudioUnit D1
scene = speaker' $ (gain' 0.5 $ sinOsc 440.0)

For simple audio graphs, we do not need behaviors at all and can use the AudioUnit ch type directly, where ch is the number of channels written with a D prefix. As the example above is mono, its type is AudioUnit D1.

Adding sound via the microphone

Let's add our voice to the mix! We'll put it above a nice low drone.

scene :: AudioUnit D1
scene =
  speaker
    $ ( (gain' 0.2 $ sinOsc 110.0)
          :| (gain' 0.1 $ sinOsc 220.0)
          : microphone
          : Nil
      )

Make sure to wear headphones to avoid feedback!

Adding playback from an audio tag

Let's add some soothing jungle sounds to the mix. We use the function play to add an audio element. This function assumes that you provide an audio element with the appropriate tag to the top-level runInBrowser function. In this case, the tag is "forest".

-- assuming we have passed in an object
-- with { forest: new Audio("my-recording.mp3") }
-- to `runInBrowser`
scene :: AudioUnit D1
scene =
  speaker
    $ ( (gain' 0.2 $ sinOsc 110.0)
          :| (gain' 0.1 $ sinOsc 220.0)
          : (gain' 0.5 $ (play "forest"))
          : microphone
          : Nil
      )

Going from mono to stereo

To go from mono to stereo, there is a family of functions: dupX, splitX and merger. In the example below, we use dup1 to duplicate a mono sound and then merge it into two stereo tracks.

If you want to make two separate audio units, you can use a normal let block. If, on the other hand, you want to reuse the same underlying unit, use dupX. When in doubt, use dupX, as you'll rarely need two independent copies of an identical audio source (a contrasting let sketch follows the example below).

scene :: AudioUnit D2
scene =
  dup1
    ( (gain' 0.2 $ sinOsc 110.0)
        + (gain' 0.1 $ sinOsc 220.0)
        + microphone
    ) \mono ->
    speaker
      $ ( (panner (-0.5) (merger (mono +> mono +> empty)))
            :| (gain' 0.5 $ (play "forest"))
            : Nil
        )
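
For contrast, here is a hypothetical sketch (not from the library's examples) of the let version: each reference to osc below describes its own oscillator in the rendered graph, so two separate audio units are created.

-- a hypothetical sketch: with a plain `let`, each reference to `osc`
-- becomes its own oscillator in the rendered audio graph
sceneTwoCopies :: AudioUnit D2
sceneTwoCopies =
  let
    osc = gain' 0.2 $ sinOsc 110.0
  in
    speaker' (panner (-0.5) (merger (osc +> osc +> empty)))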

Getting the sound to change as a function of time

Up until this point, our audio hasn't reacted to any behaviors. Let's fix that! One behavior to react to is the passage of time. Let's add a slow undulation to the lowest pitch in the drone, based on the passage of time.

scene :: Number -> AudioUnit D2
scene time =
  let
    rad = pi * time
  in
    dup1
      ( (gain' 0.2 $ sinOsc (110.0 + (10.0 * sin (0.2 * rad))))
          + (gain' 0.1 $ sinOsc 220.0)
          + microphone
      ) \mono ->
      speaker
        $ ( (panner (-0.5) (merger (mono +> mono +> empty)))
              :| (gain' 0.5 $ (play "forest"))
              : Nil
          )

Getting the sound to change as a function of a mouse input event

The next snippet of code uses the mouse to modulate the pitch of the higher note by roughly a major third.

scene :: Mouse -> Number -> Behavior (AudioUnit D2)
scene mouse time = f time <$> click
  where
  f s cl =
    let
      rad = pi * s
    in
      dup1
        ( (gain' 0.2 $ sinOsc (110.0 + (10.0 * sin (0.2 * rad))))
            + (gain' 0.1 $ sinOsc (220.0 + (if cl then 50.0 else 0.0)))
            + microphone
        ) \mono ->
        speaker
          $ ( (panner (-0.5) (merger (mono +> mono +> empty)))
                :| (gain' 0.5 $ (play "forest"))
                : Nil
            )

  click :: Behavior Boolean
  click = map (not <<< isEmpty) $ buttons mouse
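
The Mouse itself is acquired as an Effect before the scene starts. Here is a minimal sketch, assuming getMouse from FRP.Event.Mouse and eliding the runInBrowser wiring:

-- a minimal sketch; assumes getMouse :: Effect Mouse from FRP.Event.Mouse
main :: Effect Unit
main = do
  mouse <- getMouse
  -- hand (scene mouse) to runInBrowser here, as in the examples directory
  pure unit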

Making sure that certain sounds occur at a precise time

Great audio is all about timing, but so far, we have been locked to scheduling events at multiples of the control rate. The most commonly used control rate for this library is 50Hz (meaning one event every 0.02 seconds), which is too slow to accurately depict complex rhythmic events.

To fix the control rate problem, parameters that can change in time, like frequency or gain, can be specified not as a plain Number but as an AudioParameter, which carries an offset, in seconds, from the current control-rate sample. AudioParameter has several other fields that can be set to precisely control how values change over time.

Using AudioParameter directly is an advanced feature that will be discussed below. The most common way to use AudioParameter is through the function evalPiecewise, which accepts the control rate in seconds (in our case, 0.02), a piecewise function in the form Array (Tuple time value) where time and value are both Numbers, and the current time.
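
As a quick, hypothetical illustration (it relies on the epwf helper defined in the example just below):

-- a hypothetical one-second linear fade-in
fadeIn :: Number -> AudioParameter
fadeIn = epwf [ Tuple 0.0 0.0, Tuple 1.0 1.0 ]

-- sampled halfway through, the interpolated value should be near 0.5:
-- (fadeIn 0.5).param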

Let's add a small metronome on the inside of our sound. We will have it beat every 0.11 seconds, and we use the function gainT' instead of gain' to accept the AudioParameter output by epwf.

-- a piecewise function that creates an attack/release/sustain envelope
-- at a periodicity of every 0.11 seconds
pwf :: Array (Tuple Number Number)
pwf =
  join
    $ map
        ( \i ->
            map
              ( \(Tuple f s) ->
                  Tuple (f + 0.11 * toNumber i) s
              )
              [ Tuple 0.0 0.0, Tuple 0.02 0.7, Tuple 0.06 0.2 ]
        )
        (range 0 400)

kr = 20.0 / 1000.0 :: Number -- the control rate in seconds, or 50 Hz

epwf = evalPiecewise kr :: Array (Tuple Number Number) -> Number -> AudioParameter

scene :: Mouse -> Number -> Behavior (AudioUnit D2)
scene mouse time = f time <$> click
  where
  f s cl =
    let
      rad = pi * s
    in
      dup1
        ( (gain' 0.2 $ sinOsc (110.0 + (3.0 * sin (0.5 * rad))))
            + (gain' 0.1 (gainT' (epwf pwf s) $ sinOsc 440.0))
            + (gain' 0.1 $ sinOsc (220.0 + (if cl then 50.0 else 0.0)))
            + microphone
        ) \mono ->
        speaker
          $ ( (panner (-0.5) (merger (mono +> mono +> empty)))
                :| (gain' 0.5 $ (play "forest"))
                : Nil
            )

  click :: Behavior Boolean
  click = map (not <<< isEmpty) $ buttons mouse

Remembering when events happened

Sometimes, you don't just want to react to an event like a mouse click. You want to remember when the event happened in time. For example, imagine that we modulate a pitch whenever a button is clicked, like in the example below. When you click the mouse, the pitch should continue slowly rising until the mouse button is released.

To accomplish this, or anything where memory needs to be retained, the scene accepts an arbitrary accumulator as a parameter (the argument just before the time in the example below). You can think of it as a fold over time.

To make the accumulator useful, the scene should return the accumulator as well. The constructor IAudioUnit allows for this: it accepts an audio unit as well as an accumulator.

pwf :: Array (Tuple Number Number)
pwf =
  join
    $ map
        ( \i ->
            map
              ( \(Tuple f s) ->
                  Tuple (f + 0.11 * toNumber i) s
              )
              [ Tuple 0.0 0.0, Tuple 0.02 0.7, Tuple 0.06 0.2 ]
        )
        (range 0 400)

kr = 20.0 / 1000.0 :: Number -- the control rate in seconds, or 50 Hz

epwf = evalPiecewise kr

initialOnset = { onset: Nothing } :: { onset :: Maybe Number }

scene ::
  forall a.
  Mouse ->
  { onset :: Maybe Number | a } ->
  Number ->
  Behavior (IAudioUnit D2 { onset :: Maybe Number | a })
scene mouse acc@{ onset } time = f time <$> click
  where
  f s cl =
    IAudioUnit
      ( dup1
          ( (gain' 0.2 $ sinOsc (110.0 + (3.0 * sin (0.5 * rad))))
              + (gain' 0.1 (gainT' (epwf pwf s) $ sinOsc 440.0))
              + (gain' 0.1 $ sinOsc (220.0 + (if cl then (50.0 + maybe 0.0 (\t -> 10.0 * (s - t)) stTime) else 0.0)))
              + microphone
          ) \mono ->
          speaker
            $ ( (panner (-0.5) (merger (mono +> mono +> empty)))
                  :| (gain' 0.5 $ (play "forest"))
                  : Nil
              )
      )
      (acc { onset = stTime })
    where
    rad = pi * s

    stTime = case Tuple onset cl of
      (Tuple Nothing true) -> Just s
      (Tuple (Just y) true) -> Just y
      (Tuple _ false) -> Nothing

  click :: Behavior Boolean
  click = map (not <<< isEmpty) $ buttons mouse

Because the accumulator object is global for an entire audio graph, it's a good idea to use row polymorphism in the accumulator type. While keys like onset are fine for small projects, library developers should treat keys more like namespaces, making sure they do not conflict with other vendors' keys or with users' keys. A good practice is to use something like { myLibrary :: { param1 :: Number } | a }.
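
For instance, a library author might wrap all of their state under a single vendor key. A hypothetical sketch:

-- a hypothetical namespaced accumulator for a library called myLibrary
type MyLibraryAcc r
  = { myLibrary :: { param1 :: Number } | r }

initialAcc :: MyLibraryAcc ()
initialAcc = { myLibrary: { param1: 0.0 } }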

Working with feedback

Our microphone has been pretty boring up until now. Let's create a feedback loop to spice things up.

A feedback loop is created when one uses the processed output of an audio node as an input to itself. One classic physical feedback loop is echo between two walls: the delayed audio bounces back and forth, causing really interesting and surprising effects.

Because audio functions like gain consume other audio units like sinOsc, there is no way to create a loop by composing these functions. Instead, to create a feedback loop, we need to use the graph function to build an explicit audio graph.

An audio graph is a record with three keys: aggregators, processors and generators. generators can be any function that creates audio (including graph itself). processors are unary audio operators like filters and convolution. All of the audio functions that do this, like highpass and waveShaper, have graph analogues with g' prepended, ie g'highpass and g'waveShaper. aggregators are n-ary audio operators like g'add, g'mul and g'gain (gain is just addition composed with multiplication by a constant, and the special g'gain function does this in an efficient way).

The audio graph must respect certain rules: it must be fully connected, it must have a unique terminal node, it must have at least one generator, it must have no orphan nodes, it must not have duplicate edges between nodes, etc. Violating any of these rules will result in a type error at compile-time.

The graph structure is represented using incoming edges, so processors have exactly one incoming edge whereas aggregators have an arbitrary number of incoming edges, as we see below. Play it and you'll hear an echo effect!

pwf :: Array (Tuple Number Number)
pwf =
  join
    $ map
        ( \i ->
            map
              ( \(Tuple f s) ->
                  Tuple (f + 0.11 * toNumber i) s
              )
              [ Tuple 0.0 0.0, Tuple 0.02 0.7, Tuple 0.06 0.2 ]
        )
        (range 0 400)

kr = 20.0 / 1000.0 :: Number -- the control rate in seconds, or 50 Hz

epwf = evalPiecewise kr

initialOnset = { onset: Nothing } :: { onset :: Maybe Number }

scene ::
  forall a.
  Mouse ->
  { onset :: Maybe Number | a } ->
  Number ->
  Behavior (IAudioUnit D2 { onset :: Maybe Number | a })
scene mouse acc@{ onset } time = f time <$> click
  where
  f s cl =
    IAudioUnit
      ( dup1
          ( (gain' 0.2 $ sinOsc (110.0 + (3.0 * sin (0.5 * rad))))
              + (gain' 0.1 (gainT' (epwf pwf s) $ sinOsc 440.0))
              + (gain' 0.1 $ sinOsc (220.0 + (if cl then (50.0 + maybe 0.0 (\t -> 10.0 * (s - t)) stTime) else 0.0)))
              + ( graph
                    { aggregators:
                        { out: Tuple g'add (SLProxy :: SLProxy ("combine" :/ SNil))
                        , combine: Tuple g'add (SLProxy :: SLProxy ("gain" :/ "mic" :/ SNil))
                        , gain: Tuple (g'gain 0.9) (SLProxy :: SLProxy ("del" :/ SNil))
                        }
                    , processors:
                        { del: Tuple (g'delay 0.2) (SProxy :: SProxy "filt")
                        , filt: Tuple (g'bandpass 440.0 1.0) (SProxy :: SProxy "combine")
                        }
                    , generators:
                        { mic: microphone
                        }
                    }
                )
          ) \mono ->
          speaker
            $ ( (panner (-0.5) (merger (mono +> mono +> empty)))
                  :| (gain' 0.5 $ (play "forest"))
                  : Nil
              )
      )
      (acc { onset = stTime })
    where
    rad = pi * s

    stTime = case Tuple onset cl of
      (Tuple Nothing true) -> Just s
      (Tuple (Just y) true) -> Just y
      (Tuple _ false) -> Nothing

  click :: Behavior Boolean
  click = map (not <<< isEmpty) $ buttons mouse
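
Stripped of the drone, the accumulator and the mouse, the feedback loop alone distills to the following minimal sketch:

-- signal flow: mic feeds combine; combine feeds the bandpass, then the
-- 0.2s delay, then the 0.9 gain, which feeds back into combine;
-- out taps combine
echo :: AudioUnit D1
echo =
  speaker'
    $ graph
        { aggregators:
            { out: Tuple g'add (SLProxy :: SLProxy ("combine" :/ SNil))
            , combine: Tuple g'add (SLProxy :: SLProxy ("gain" :/ "mic" :/ SNil))
            , gain: Tuple (g'gain 0.9) (SLProxy :: SLProxy ("del" :/ SNil))
            }
        , processors:
            { del: Tuple (g'delay 0.2) (SProxy :: SProxy "filt")
            , filt: Tuple (g'bandpass 440.0 1.0) (SProxy :: SProxy "combine")
            }
        , generators:
            { mic: microphone }
        }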

Adding visuals

Let's add a little dot that gets bigger when we click. We'll do that using the AV constructor, which accepts an optional audio unit, an optional visual (a drawing plus words) and the accumulator.

pwf :: Array (Tuple Number Number)
pwf =
  join
    $ map
        ( \i ->
            map
              ( \(Tuple f s) ->
                  Tuple (f + 0.11 * toNumber i) s
              )
              [ Tuple 0.0 0.0, Tuple 0.02 0.7, Tuple 0.06 0.2 ]
        )
        (range 0 400)

kr = 20.0 / 1000.0 :: Number -- the control rate in seconds, or 50 Hz

epwf = evalPiecewise kr

initialOnset = { onset: Nothing } :: { onset :: Maybe Number }

scene ::
  forall a.
  Mouse ->
  { onset :: Maybe Number | a } ->
  CanvasInfo ->
  Number ->
  Behavior (AV D2 { onset :: Maybe Number | a })
scene mouse acc@{ onset } (CanvasInfo { w, h }) time = f time <$> click
  where
  f s cl =
    AV
      { audio:
          Just
            $ dup1
                ( (gain' 0.2 $ sinOsc (110.0 + (3.0 * sin (0.5 * rad))))
                    + (gain' 0.1 (gainT' (epwf pwf s) $ sinOsc 440.0))
                    + (gain' 0.1 $ sinOsc (220.0 + (if cl then (50.0 + maybe 0.0 (\t -> 10.0 * (s - t)) stTime) else 0.0)))
                    + ( graph
                          { aggregators:
                              { out: Tuple g'add (SLProxy :: SLProxy ("combine" :/ SNil))
                              , combine: Tuple g'add (SLProxy :: SLProxy ("gain" :/ "mic" :/ SNil))
                              , gain: Tuple (g'gain 0.9) (SLProxy :: SLProxy ("del" :/ SNil))
                              }
                          , processors:
                              { del: Tuple (g'delay 0.2) (SProxy :: SProxy "filt")
                              , filt: Tuple (g'bandpass 440.0 1.0) (SProxy :: SProxy "combine")
                              }
                          , generators:
                              { mic: microphone
                              }
                          }
                      )
                ) \mono ->
                speaker
                  $ ( (panner (-0.5) (merger (mono +> mono +> empty)))
                        :| (gain' 0.5 $ (play "forest"))
                        : Nil
                    )
      , visual:
          Just
            { painting:
                const
                  $ filled
                      (fillColor (rgb 0 0 0))
                      ( circle
                          (w / 2.0)
                          (h / 2.0)
                          (if cl then 25.0 else 5.0)
                      )
            , words: mempty
            }
      , accumulator: acc { onset = stTime }
      }
    where
    rad = pi * s

    stTime = case Tuple onset cl of
      (Tuple Nothing true) -> Just s
      (Tuple (Just y) true) -> Just y
      (Tuple _ false) -> Nothing

  click :: Behavior Boolean
  click = map (not <<< isEmpty) $ buttons mouse

Conclusion

We started with a simple sound and built all the way up to a complex, precisely-timed stereo structure with feedback that responds to mouse events both visually and sonically. These examples also exist in Readme.purs.

From here, the only thing left is to make some noise! There are many more audio units in the library, such as filters, compressors and convolvers. Almost the whole Web Audio API is exposed.

To see a list of exported audio units, you can check out Audio.purs. In a future version of this library, we will refactor things so that all of the audio units are in one package.

MIDI

The file src/FRP/Behavior/MIDI.purs exposes one function, midi, which can be used in conjunction with getMidi (from src/FRP/Event/MIDI.purs) to incorporate realtime MIDI data into the audio graph. For an example of how this is done, check out examples/midi-in.

Interacting with the browser

In simple setups, you'll interact with the browser in a <script> tag to create resources like buffers and float arrays. This is how it is done in most of the ./examples directory. However, sometimes you'll be creating a webpage using PureScript, in which case you may need to create a browser-specific resource like an audio buffer for a playBuf in PureScript.

To this end, there are several helper functions that allow you to interact directly with the browser. The advantage of these functions is that they link into the purescript-audio-behaviors type system. However, as they are just assigning types to opaque blobs from the browser, you can also use your own FFI functions and cast the results to types understood by this library.

-- creates a new audio context
-- necessary for some of the functions below and for `runInBrowser`
makeAudioContext :: Effect AudioContext

-- decode audio data from a Uri, ie a link to a wav file
decodeAudioDataFromUri :: AudioContext -> String -> Effect (Promise BrowserAudioBuffer)

-- decode audio data from a base 64 encoded string, passed directly as an argument
decodeAudioDataFromBase64EncodedString :: AudioContext -> String -> Effect (Promise BrowserAudioBuffer)

-- make an audio track
-- the advantage of audio tracks over audio buffers is that
-- they are streamed, so you don't need to wait for them to be downloaded
-- to start playing
makeAudioTrack :: String -> Effect BrowserAudioTrack

-- make an audio buffer
-- best for things like creating drum machines, impulse responses, granular synthesis etc
-- basically anything short
-- for anything that resembles streaming, use makeAudioTrack
makeAudioBuffer :: AudioContext -> AudioBuffer -> Effect BrowserAudioBuffer

-- makes a 32-bit float array
-- useful when creating wave shapers (this is what adds the distortion)
makeFloatArray :: Array Number -> Effect BrowserFloatArray

-- makes a periodic wave
-- this is what is used for the periodicOsc unit
makePeriodicWave ::
  forall len.
  Pos len =>
  AudioContext ->
  Vec len Number ->
  Vec len Number ->
  Effect BrowserPeriodicWave
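
As a sketch of how a couple of these fit together in your own PureScript code (the URL and the two-harmonic wave are illustrative; the results are not yet wired into a scene):

-- a hedged sketch: create a context, kick off a buffer decode, and
-- build a two-harmonic periodic wave for periodicOsc
setup :: Effect Unit
setup = do
  ctx <- makeAudioContext
  _ <- decodeAudioDataFromUri ctx "http://example.com/my-recording.wav"
  _ <- makePeriodicWave ctx (0.3 +> 0.1 +> empty) (0.0 +> 0.0 +> empty)
  pure unit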

Advanced usage

Here are some tips for advanced usage of purescript-audio-behaviors.

AudioParameter

The AudioParameter type is the type for floating-point values accepted by functions like gainT or sinOscT. For example:

osc0 = sinOsc 440.0
osc1 = sinOscT (defaultParam { param = 440.0 })

The AudioParameter type has the following signature:

type AudioParameter
  = { param :: Number
    , timeOffset :: Number
    , transition :: AudioParameterTransition
    , forceSet :: Boolean
    }
  • param: The floating point value
  • timeOffset: The offset in time from the previous control rate cycle. For example, if the control rate is 0.02 and the event should happen at 1.0735 seconds, then the value would be 0.0135, or 1.0735 % 0.02.
  • transition: How the transition should occur from the previous value. Options are NoRamp, LinearRamp, ExponentialRamp and Immediately. Immediately ignores the audio clock and schedules an event to happen ASAP, making it a good option for reactions to input events.
  • forceSet: Whether to set the value even if it has not changed. As an optimization, values are only set when they change from the previous value. While this works in most cases, it can lead to audible clicks when a linear or exponential ramp starts from a stale value. Setting forceSet just before a linear or exponential transition guarantees that the ramp starts from the correct value.
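
For example, here is a hedged sketch of a parameter that ramps exponentially to a new value scheduled 5ms after the current control-rate sample:

-- illustrative values, built from defaultParam (shown above)
rampTo880 :: AudioParameter
rampTo880 =
  defaultParam
    { param = 880.0
    , timeOffset = 0.005
    , transition = ExponentialRamp
    }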

Exporting

purescript-audio-behaviors translates scenes to a sort of "assembly" language that is passed to an audio rendering function. This language has primitives like NewUnit (create a new audio unit) and ConnectTo (connect one unit to another). These instructions are sent to an exporter for further downstream processing. Examples of downstream actions could be:

  • printing all processing information to a log, ie console.log
  • sending to a server for dispatching to MIDI devices or SuperCollider
  • tweeting your audio graph every 20ms (I don't know why you'd do this... but you could!)

The library provides a defaultExporter that is a no-op. To override this, pass an exporter with type Exporter to the runInBrowser function.

Named units

As you build larger and larger audio structures, you may notice some stuttering in your application.

One way to mitigate this is to give your audio units names. Named audio units speed up computation and result in less-glitchy corner cases when the audio graph changes radically.

Giving names is a bit tedious, so the recommendation is to build audio without names first and then, once you're satisfied with your scene, to name everything. Here is how you assign a name to an audio unit:

sinOsc_ "My name" 440.0

Notice the trailing underscore after the function name. That's all you need: adding a trailing underscore gives you a variant of the function that accepts a name as its first argument.

If you are building an audio function that is supposed to be reused (ie your own filter, etc), named audio is a great idea, as it will speed up computation everywhere the function is used.

myAwesomeFilter t = highpass_ ("myAwesomeFilter_" <> t) 1000.0 0.3 0.5
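
And a hedged usage sketch, assuming (as the partial application above suggests) that highpass_ takes the audio unit it processes as its final argument:

-- hypothetical: the named filter defined above, applied to the microphone
filtered :: AudioUnit D1
filtered = speaker' (myAwesomeFilter "voice" microphone)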

Tweaking engine parameters

The runInBrowser function takes an EngineInfo parameter that specifies how the audio and animation should be rendered.

type EngineInfo
  = { msBetweenSamples :: Int
    , msBetweenPings :: Int
    , fastforwardLowerBound :: Number
    , rewindUpperBound :: Number
    , initialOffset :: Number
    , doWebAudio :: Boolean
    }
  • msBetweenSamples - The number of milliseconds between samples of the audio behavior. This is the effective control rate. The lower the value, the greater the rhythmic precision; the higher the value, the less likely there will be jank. Try 20.
  • msBetweenPings - The number of milliseconds between pings to the sampling engine. The lower the value, the less chance you will miss a sampling deadline, but the higher the chance your page will lock up. This must be less than msBetweenSamples. Try 15.
  • fastforwardLowerBound - The number of seconds below which the audio engine will skip a frame. The lower this is, the less likely there will be a skip, but the more likely the skip will sound jarring if it happens. Try 0.025.
  • rewindUpperBound - The number of seconds of look-ahead. For uses that have no interactive component other than starting and stopping the sound (meaning no mouse, no MIDI keyboard, etc) this can be large (ie 1.0 or even higher). For apps with an interactive component, you want this as low as possible, ie 0.06 or even lower. Note that this should be at least twice msBetweenSamples, converted to seconds.
  • initialOffset - The number of seconds to wait before playing. JavaScript does a lot of memory allocation when audio starts playing, which sometimes results in jank. Try something between 0.1 and 0.4.
  • doWebAudio - Whether to render audio with the Web Audio API. If true, sound plays in the browser; if false, the Web Audio API won't be called, which is useful when you want to use an exporter without making sound in the browser.
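
Putting the suggested values together, a configuration for an interactive app might look like this sketch:

-- values follow the suggestions above; tune per app
engineInfo :: EngineInfo
engineInfo =
  { msBetweenSamples: 20
  , msBetweenPings: 15
  , fastforwardLowerBound: 0.025
  , rewindUpperBound: 0.06
  , initialOffset: 0.1
  , doWebAudio: true
  }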

Bundling on your site

To see how to bundle this library on your site, please visit the examples directory.

To compile the JS for the hello world example, issue the following command:

spago -x examples.dhall bundle-app \
  --main FRP.Behavior.Audio.Example.HelloWorld \
  --to examples/hello-world/index.js

Other examples will work the same way, with the directory and module name changing.

You will also need to copy all of the files from the top-level custom-units folder into your project folder. With a correct setup, the hello-world directory should look like this:

examples/
  hello-world/
    HelloWorld.purs  # included in the git distro
    index.html # included in the git distro
    index.js # the generated js from spago bundle-app
    ps-aud-mul.js # plus any other files from the custom-units folder

From there, you can run python -m http.server in the directory of the example and it will serve all of the files. Visit http://localhost:8000 in Firefox to interact with the page.
