Humans & Machines @ Southbank Centre

This repository contains the tools developed and used in a residency and performance by Hector Plimmer and Ben Hayes at the Purcell Room at London's Southbank Centre.

The performance focuses (or focused, depending on when you read this) on the interaction between humans and so-called AI in a live music setting.

The tools here are designed to allow quick and flexible interaction with a handful of deep learning models from within Ableton Live. In our case, this was to let us develop a performance in which we improvise alongside such models.

These tools, and the code behind them, are very experimental and were hacked together alongside conceiving a creative performance. They are not well coded. They're full of bugs and in need of a big refactor (seriously, it's a mess in there). If you want a set of robust in-DAW tools for interacting with compositional AI, look no further than Google's Magenta Studio. However, if you want to have a little more flexibility, get a little more under the hood, produce some unexpected results, and most importantly jam with AI in "realtime", these tools might be for you.

Further massive disclaimer: as these were built alongside a creative project under immense time pressure, very little attention was paid to the interaction design, so they are fiddly to work with. If they do not function as expected, feel free to drop me an email at ben@benhayes.net and I will try to help you get them working.

A few acknowledgements are necessary: the core of the deep learning here is Google Magenta's awesome MusicVAE model, and none of this could have been achieved were it not for their excellent JS ports of their models (a shoutout also to TensorFlow.js). Big gratitude, too, to Cycling '74 for Node for Max, which indirectly makes GPU-accelerated deep learning inside Ableton Live possible.

The tools

There are three main components to this suite:

VAE Subspacer

Image of 2D VAE Subspacer

This loads an instance of MusicVAE, initialised from a checkpoint (Magenta provide a number of pretrained checkpoints). It can then train a second, smaller VAE to reconstruct the MusicVAE latent codes of given training examples, creating an explorable subspace of (hopefully) musically connected ideas.
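
To make the two-stage idea concrete, here is a minimal sketch of the workflow, not the repository's actual code: the package names, checkpoint URL, and layer sizes are assumptions, and where the device trains a small VAE, a plain autoencoder is shown for brevity.

```js
// Minimal sketch, assuming @magenta/music and @tensorflow/tfjs-node.
// Checkpoint URL and layer sizes are illustrative, not the device's actual values.
const mm = require('@magenta/music');
const tf = require('@tensorflow/tfjs-node');

const CHECKPOINT =
  'https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/mel_2bar_small';

async function buildSubspace(trainingSequences) {
  // Stage 1: the pretrained MusicVAE turns NoteSequences into latent codes.
  const mvae = new mm.MusicVAE(CHECKPOINT);
  await mvae.initialize();
  const z = await mvae.encode(trainingSequences);   // shape [numExamples, zDim]
  const zDim = z.shape[1];

  // Stage 2: a tiny model compresses those codes into a 2-D subspace.
  // (The real device trains a VAE with a KL term; a plain autoencoder is shown.)
  const zIn = tf.input({shape: [zDim]});
  const encLayer = tf.layers.dense({units: 2, activation: 'tanh'});
  const decLayer = tf.layers.dense({units: zDim});
  const recon = decLayer.apply(encLayer.apply(zIn));

  const autoencoder = tf.model({inputs: zIn, outputs: recon});
  autoencoder.compile({optimizer: 'adam', loss: 'meanSquaredError'});
  await autoencoder.fit(z, z, {epochs: 200, verbose: 0});

  // Separate decoder, so a point picked from the 2-D plane can be mapped
  // back to a MusicVAE latent and then to notes via mvae.decode().
  const codeIn = tf.input({shape: [2]});
  const decoder = tf.model({inputs: codeIn, outputs: decLayer.apply(codeIn)});
  return {mvae, decoder};
}
```

A point chosen from the 2-D plane can then be turned back into notes with something like `await mvae.decode(decoder.predict(tf.tensor2d([[x, y]])))`.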

There are 2-dimensional and 4-dimensional versions, with the 2D one offering pretty visualisations of the training examples projected into its latent space.

Image of 4D VAE Subspacer

It is possible to save and load models. A saved or loaded model is stored with the Ableton Live set, meaning it is restored when the project is next opened.
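
For reference, persisting the trained TensorFlow.js model itself can be done with the standard tf.js file handlers. A minimal sketch, assuming tfjs-node and an arbitrary directory; how the device ties the saved model to the Live set via Max is not shown here.

```js
// Minimal sketch using standard TensorFlow.js save/load; the path is arbitrary.
const tf = require('@tensorflow/tfjs-node');

async function saveSubspacer(model, dir) {
  await model.save(`file://${dir}`);                       // writes model.json + weights
}

async function loadSubspacer(dir) {
  return tf.loadLayersModel(`file://${dir}/model.json`);   // restores a LayersModel
}
```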

Sequence Responder

Image of Sequence Responder

This constructs a simple CNN which learns to predict VAE Subspacer latent codes from a bar of MIDI. Essentially, it is a learned translation-invariant feature extractor tacked onto a simple linear regression model. Combined with the (ideally) musical closeness of the VAE Subspacer latent space, this generally means that loosely similar input produces loosely similar output, so it's possible to "jam" musical ideas with the system and play off one another.
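
A rough sketch of such a model in TensorFlow.js, assuming one bar of MIDI quantised to a 16 × 128 piano roll and a 2-D subspace; the filter count and kernel size are guesses, not the device's actual architecture.

```js
// Hypothetical sketch: a small 1-D CNN mapping one quantised bar of MIDI
// (16 steps x 128 pitches) to a VAE Subspacer latent code. Global pooling over
// time gives the loose translation invariance described above; the final dense
// layer is the linear readout into the subspace.
const tf = require('@tensorflow/tfjs-node');

function buildSequenceResponder(latentDim = 2) {
  const model = tf.sequential();
  model.add(tf.layers.conv1d({
    inputShape: [16, 128],   // [timeSteps, pitches], one bar at 16th-note resolution
    filters: 32,
    kernelSize: 4,
    activation: 'relu',
  }));
  model.add(tf.layers.globalMaxPooling1d());       // pool over time -> position-invariant features
  model.add(tf.layers.dense({units: latentDim}));  // linear regression into the subspace
  model.compile({optimizer: 'adam', loss: 'meanSquaredError'});
  return model;
}

// Training pairs would be (pianoRollBar, subspaceCode) examples gathered from
// the VAE Subspacer, e.g.: await model.fit(bars, codes, {epochs: 100});
```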

The device is designed to work in combination with the MIDI Capture device (described below), and so responds in realtime to incoming streams of MIDI.

MIDI Capture

Image of MIDI Capture

This quantises incoming MIDI to 16th notes in real time, and sends it globally throughout the Ableton Live set to the named receiver. It is designed to work with the Sequence Responder for real-time musical interaction.
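
The quantisation itself amounts to snapping each note's position, expressed in beats, to the nearest 16th-note grid line. A minimal sketch, not the repository's code; the Max-side [send]/[receive] plumbing is only indicated in a comment.

```js
// Minimal sketch: snap a note's position (in beats) to the nearest 16th note.
// In 4/4 a 16th note is 0.25 beats, so a one-bar window has steps 0..15.
function quantiseTo16ths(beatPosition) {
  const STEP = 0.25;                              // one 16th note, in beats
  const gridIndex = Math.round(beatPosition / STEP);
  return {
    step: ((gridIndex % 16) + 16) % 16,           // position within the bar
    beats: gridIndex * STEP,                      // snapped position in beats
  };
}

// e.g. a note played at beat 1.1 snaps to beat 1.0 (step 4).
// The quantised note is then passed out of the device to a named [send] object,
// so any [receive] of the same name elsewhere in the Live set (e.g. inside the
// Sequence Responder) picks it up.
```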
