Universal audio synthesizer control with normalizing flows

This repository hosts the code and additional results for the paper Universal audio synthesizer control with normalizing flows. You can check out a video demonstration of FlowSynth on YouTube.

Installing the flow synthesizer plugin

In order to try out the Flow synthesizer plugin, you must:

  1. Have an installed version of the Diva VST (the system works with the free trial version, but it will occasionally produce noise). For simplicity, please ensure that it is located here
  2. Install the latest (bleeding-edge) versions of both the Bach and Dada libraries for MaxMSP
  3. Install the Mubu library for MaxMSP
  4. Have an up-to-date version of Python 3.7
  5. Install the Python dependencies by running the following line at the root of this folder
$ pip install -r requirements.txt
  6. Put the plugin/flow_synth.amxd device inside a MIDI track in Ableton Live
  7. Optionally, if you have a LeapMotion sensor, you can install the Leap framework to enjoy it with the synth.
  8. ???
  9. Profit
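After step 5, a quick way to confirm the dependencies installed cleanly is a small stdlib-only check. This helper is hypothetical (not part of the repository) and only looks at distribution names, ignoring version pins:

```python
# Hypothetical helper (not part of this repo): report which packages listed
# in requirements.txt are missing from the current Python environment.
from importlib import metadata
from pathlib import Path
import re

def missing_requirements(path="requirements.txt"):
    """Return the names of requirements that are not installed."""
    missing = []
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Keep only the distribution name (drop pins like 'torch>=1.0').
        name = re.split(r"[<>=!~\[;]", line, maxsplit=1)[0].strip()
        try:
            metadata.version(name)
        except metadata.PackageNotFoundError:
            missing.append(name)
    return missing
```

An empty return value means every listed dependency is importable from the active environment.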

NB: If the device seems unresponsive, you can try running the server manually:

$ cd code && python

Note that the plugin has only been tested on macOS High Sierra (10.13.6).

Supporting webpage

For a better viewing experience, please visit the corresponding supporting website.

It includes the following:

  • Supplementary figures
  • Audio examples
    • Reconstruction
    • Macro-control learning
    • Neighborhood exploration
    • Interpolation
    • Vocal sketching
  • Real-time implementation in Ableton Live

You can also browse the different sub-directories of the main docs directory directly.


The dataset can be downloaded here:




The code has been developed with Python 3.7. It should work with other versions of Python 3 but has not been tested with them. Moreover, we rely on several third-party libraries, listed in requirements.txt. They can be installed with:

$ pip install -r requirements.txt

As our experiments are coded in PyTorch, no additional library is required to run them on GPU (provided you already have CUDA installed).
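As a minimal sketch of what this means in practice (assuming PyTorch from requirements.txt is installed), the usual pattern is to select a device at runtime and move tensors and models to it; the same script then runs on CPU or GPU without changes:

```python
# Minimal sketch: run on GPU when CUDA is available, otherwise fall back
# to CPU. Assumes PyTorch from requirements.txt is installed.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(4, 8).to(device)  # tensors (and models) are moved the same way
print(device.type)
```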


For people interested in the research aspects of this repository: if you want to try new models or evaluate variations of the existing ones, you will at some point need to render the corresponding audio. We rely on the great RenderMan library to batch-generate audio output from synthesizer presets.


The code is mostly divided into two scripts. The first allows you to train a model from scratch, as described in the paper. The second generates the figures of the paper, along with all the additional supporting material visible on the supporting page of this repository.

Pre-trained models

Note that a set of pre-trained models is available in the code/results folder.

Models details

As discussed in the paper, the very large number of baseline models implemented did not allow us to provide all the parameters for the reference models (which are defined in the source code). However, we provide these details on the documentation page, in the models details section.

