
System Overview

Mark Slee edited this page Apr 19, 2018 · 7 revisions


Envelop for Live (E4L) is an open-source audio production framework for spatial audio composition and performance. It combines Ableton Live as a music production environment with a set of Max for Live devices that act as spatial effects processors and audio renderers. E4L is designed to be a highly modular, flexible platform for artists to compose and perform spatial audio, and for developers to create new kinds of audio effects in the Ambisonics domain.


Ambisonics was originally developed for 3D sound recording and reproduction on speakers evenly distributed over a virtual sphere. It is a set of techniques for reconstructing a completely immersive sound field that emulates the way we hear naturally. Our brain identifies the directionality and location of audio by detecting the subtle differences between sound waves as they arrive at each ear. Ambisonics models these psychoacoustic principles digitally to create the perception of sound directionality. Sound field reconstruction techniques that started in the 1970s have evolved into a high-fidelity format called Higher-Order Ambisonics (HOA) that can virtually position sound in 3D space.
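To make the encoding idea concrete, here is a minimal sketch of first-order Ambisonics (B-format) encoding of a mono signal, using FuMa-style channel conventions chosen purely for illustration (E4L itself works at 3rd order, with more channels and different normalization):

```python
import numpy as np

# Sketch: first-order Ambisonics (B-format) encoding of a mono signal
# at azimuth theta and elevation phi. Conventions here (FuMa-style
# W scaling) are an illustrative assumption, not E4L's internals.
def encode_first_order(mono, theta, phi):
    w = mono * (1.0 / np.sqrt(2))           # omnidirectional component
    x = mono * np.cos(theta) * np.cos(phi)  # front-back
    y = mono * np.sin(theta) * np.cos(phi)  # left-right
    z = mono * np.sin(phi)                  # up-down
    return np.stack([w, x, y, z])

# A source straight ahead puts all directional energy into X
sig = np.ones(4)
b = encode_first_order(sig, theta=0.0, phi=0.0)
print(b[:, 0])  # W ~= 0.707, X = 1, Y = 0, Z = 0
```

The key point is that direction is baked into per-channel gains, so a single encoded stream can later be decoded to any speaker layout.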

Wikipedia offers a good basic technical overview.

*A note on nomenclature: Ambisonics has not settled into a standard set of terms and is variously marketed as multi-dimensional audio, 360 audio, and 3D audio, among others. Other related terms include spherical surround sound, directional- and coordinate-based sound, spatial audio, sound field, etc.


Binaural recording and playback refers to a method of using a set of filters (called head-related transfer functions, or HRTFs) that simulates a spatial scene when content is played back over headphones. Because Ambisonics content can easily be transcoded into a stereo binaural signal, Envelop for Live can be used with headphones in a preview mode if an Ambisonics-capable speaker array is not at hand. Since using Envelop for Live in binaural mode is the easiest way to get started, the first part of this guide is written for the binaural use case.
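One common way to transcode Ambisonics to binaural is the "virtual loudspeaker" approach: decode to a small set of virtual speakers, then convolve each feed with the HRIRs for that direction. The sketch below illustrates the signal flow at first order only; the HRIRs are synthetic placeholders (a crude delay plus attenuation), since a real renderer would load measured HRTF data:

```python
import numpy as np

fs = 48000
# Four virtual speakers on the horizontal plane (assumed layout)
speaker_azimuths = np.radians([45, 135, 225, 315])

# Basic first-order sampling decoder: per-speaker gains for (W, X, Y)
def decode_gains(az):
    return np.array([1.0 / np.sqrt(2), np.cos(az), np.sin(az)])

# Synthetic placeholder HRIR: left-side speakers reach the left ear
# earlier and louder (crude ITD/ILD model, illustration only)
def synth_hrir(az, ear):  # ear = +1 left, -1 right
    hrir = np.zeros(64)
    delay = int(10 + 5 * (1 - ear * np.sin(az)))
    gain = 0.5 + 0.5 * (1 + ear * np.sin(az)) / 2
    hrir[delay] = gain
    return hrir

# Example B-format input: an impulse encoded at 90 degrees (hard left)
theta = np.radians(90)
src = np.zeros(256); src[0] = 1.0
B = np.vstack([src / np.sqrt(2), src * np.cos(theta), src * np.sin(theta)])

left = np.zeros(src.size + 63)
right = np.zeros(src.size + 63)
for az in speaker_azimuths:
    feed = decode_gains(az) @ B  # virtual speaker feed
    left += np.convolve(feed, synth_hrir(az, +1))
    right += np.convolve(feed, synth_hrir(az, -1))

# A source on the left ends up louder in the left ear
print(left.max() > right.max())
```

The same pipeline generalizes to higher orders and more virtual speakers; only the gain and filter tables grow.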

Ableton Live

Ableton Live is a digital audio workstation (DAW) created in 1999 by Robert Henke and Gerhard Behles. With its innovative "session view" that allowed musicians to perform and remix their works live, Ableton Live quickly became one of the most popular pieces of music software for electronic musicians. Its "Max for Live" product, built in collaboration with Cycling '74, allows developers to create new effects for the Live platform. Envelop for Live takes full advantage of Live's capabilities, and extends Live beyond the limitations of its stereo bus to support an Ambisonics rendering environment.

System Architecture

The Envelop for Live system consists of a set of Max for Live devices that encode audio into the 3rd-order Ambisonics domain and route this audio using Max for Live's internal routing capabilities. The Ambisonics-domain audio may be decoded directly within Live to a binaural output for headphone monitoring, or to an arbitrary number of speakers. For installations with a large number of speakers, it is also possible to route the 16 channels of 3rd-order Ambisonics audio to a second machine for remote decoding.
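The 16-channel figure follows directly from the Ambisonics order: an Nth-order sound field is represented by (N + 1)² spherical-harmonic channels. A one-line sketch:

```python
# Nth-order Ambisonics carries (N + 1)^2 channels, which is why the
# E4L bus uses 16 channels at 3rd order.
def ambisonic_channels(order):
    return (order + 1) ** 2

for n in range(4):
    print(n, ambisonic_channels(n))  # 0->1, 1->4, 2->9, 3->16
```

This is also why moving to 4th or 5th order (25 or 36 channels) would exceed what a single Live track can carry without additional routing machinery.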


The Envelop for Live 10 architecture was developed by Mark Slee and Rama Gottfried, making use of the open-source ICST Ambisonics Tools.

Version 1 of the Envelop for Live system was developed by Rama Gottfried, and was modeled on the architecture of Ircam-Spat combined with an odot port of Alex Harker's "Convolution Reverb Pro" Max For Live device adapted for use with B-Format impulse responses.

For more information on the signal processing architecture please see:

  • Jean-Marc Jot, "Efficient models for reverberation and distance rendering in computer music and virtual audio reality", IRCAM, 1997.
  • Markus Noisternig, Thomas Musil, et al., "A 3D Real Time Rendering Engine for Binaural Sound Reproduction", ICAD, 2003.
  • Alexander Harker and Pierre Alexandre Tremblay, "The HISSTools Impulse Response Toolbox: Convolution for the Masses", ICMC, 2012.

For information on the CNMAT-odot system please see:

  • John MacCallum, Rama Gottfried, Ilya Rostovtsev, Jean Bresson, and Adrian Freed, "Dynamic Message-Oriented Middleware with Open Sound Control and Odot", ICMC, 2015.