Sema – A Playground for Live Coding Music and Machine Learning


Sema is a playground where you can rapidly prototype live coding mini-languages for signal synthesis, machine learning and machine listening.

Sema aims to provide an online integrated environment for designing both abstract high-level languages and more powerful low-level languages.

Sema implements a set of core design principles:

  • Integrated signal engine – In terms of language and signal engine integration, there is no conceptual split: everything is a signal. However, for the sake of modularity, reusability and a sound architecture, Sema's signal engine is implemented by the separate sema-engine library.

  • Single-sample signal processing – Per-sample sound processing to support techniques that rely on feedback loops, such as physical modelling, reverberation and IIR filtering.

  • Sample rate transduction – It is simpler to do signal processing with one principal sample rate, the audio rate. Different sample rate requirements of dependent objects can be resolved by upsampling and downsampling, using a transducer. The transducer concept enables us to accommodate a variety of processes with varying sample rates (video, spectral rate, sensors, ML model inference) within a single engine.

  • Minimal abstractions – There are no high-level abstractions such as buses, synths, nodes, servers, or any language scaffolding in our signal engine. Such abstractions sit within the end-user language design space.
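Two of the principles above can be sketched in code. The following is a hypothetical illustration, not the sema-engine API (all names here are ours): a one-pole lowpass filter processed one sample at a time, and a "transducer" that resolves a slower process to the audio rate by sample-and-hold.

```typescript
// One-pole lowpass: the feedback state `prev` (y[n-1]) is why per-sample
// processing matters -- a block-based engine cannot close a feedback
// loop tighter than its block size.
function makeOnePole(): (x: number, coeff: number) => number {
  let prev = 0; // single-sample feedback state y[n-1]
  return (x: number, coeff: number): number => {
    prev = (1 - coeff) * x + coeff * prev;
    return prev;
  };
}

// Transducer: run a slow process (e.g. ML model inference) once every
// `ratio` audio samples and hold its last value (zero-order hold), so
// downstream code sees only one principal rate -- the audio rate.
function makeTransducer(slow: () => number, ratio: number): () => number {
  let held = 0;
  let count = 0;
  return (): number => {
    if (count === 0) held = slow(); // tick the slow process here only
    count = (count + 1) % ratio;
    return held;
  };
}

// Usage: a slow "inference" modulates the filter coefficient every 64
// samples, while the filter itself runs per-sample at audio rate.
const lowpass = makeOnePole();
const coeffSource = makeTransducer(() => Math.random() * 0.5 + 0.5, 64);
const out: number[] = [];
for (let n = 0; n < 256; n++) {
  out.push(lowpass(Math.random() * 2 - 1, coeffSource())); // white noise in
}
```

The zero-order hold is the simplest possible transducer; in practice interpolation or filtering can smooth the rate conversion, but the principle is the same.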


Sema requires the following dependencies to be installed:

  • Chrome browser or any Chromium-based browser (e.g. Brave, Microsoft Edge, Opera)
  • Node.js active LTS version (currently v14.4.0). To switch between node versions, you can use nvm.
  • NPM cli OR Yarn

How to build and run the Sema playground on your machine

If you decide to use npm to build Sema, run the following commands:

$ cd sema
$ npm install
$ npm run build
$ npm run dev

If you prefer the Yarn package manager instead, run:

$ cd sema
$ yarn
$ yarn build
$ yarn dev

Once you have Sema running as a Node.js application, you can load it in your browser at the port reported by the dev server.

Hardware acceleration:

Hardware acceleration has a drastic effect on TensorFlow.js model training speed. To enable it in Chrome:

  • Navigate to chrome://settings
  • Click the Advanced ▼ button at the bottom of the page
  • In the System section, ensure the Use hardware acceleration when available checkbox is checked (relaunch Chrome for changes to take effect)

Linux Users

Sema uses Web Audio API Audio Worklets, whose performance is very sensitive to CPU power scaling. If you are experiencing sound quality issues, try setting the CPU governor to performance mode, e.g. on Ubuntu:

$ sudo cpupower frequency-set --governor performance


Sema's reference documentation aims to support the user's learning experience and is integrated into the application.

Sema's Wiki documentation aims to support contributors, focusing on how Sema is designed and built.


Sema is an open-source project; we hope its underlying vision, aims and structure will motivate you to contribute.


Bernardo, F., Kiefer, C., Magnusson, T. (2020). A Signal Engine for a Live Coding Language Ecosystem, J. Audio Eng. Soc., vol. 68, no. 10, pp. 756-766. doi:

Bernardo, F., Kiefer, C., Magnusson, T. (2020). Designing for a Pluralist and User-Friendly Live Code Language Ecosystem with Sema. 5th International Conference on Live Coding, University of Limerick, Limerick, Ireland

Bernardo, F., Kiefer, C., Magnusson, T. (2019). An AudioWorklet-based Signal Engine for a Live Coding Language Ecosystem. In Proceedings of Web Audio Conference 2019, Norwegian University of Science and Technology (NTNU), Trondheim, Norway (Best Paper Award at Web Audio Conference 2019)