Neural network driven MIDI generator

RBM Toadofsky

The composer, Toadofsky, consorting with his tadpole companions

Toadofsky is a minor character in the classic SNES role-playing game Super Mario RPG. He is a composer who constantly struggles to find inspiration for new music.

RBM Toadofsky is a Python program, named after this fictional composer, that trains a neural network on a dataset of provided MIDI files and uses it to generate novel MIDIs. Such a tool would have been invaluable to the fictional Toadofsky!

A restricted Boltzmann machine (RBM) is used to generate short MIDI sequences. An RBM is a neural network with two layers, one visible and one hidden. Every visible node is reciprocally connected to every hidden node, with no visible-visible or hidden-hidden connections. Each visible node takes MIDI data at a given timestep, multiplies it by a weight, and passes the result to the hidden layer. MIDIs are generated by Gibbs sampling from the trained RBM.
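Gibbs sampling alternates between sampling the hidden layer given the visible layer and sampling the visible layer back from the hidden one. The sketch below illustrates the idea; the layer sizes, random weights, and number of steps are illustrative assumptions, not the project's actual parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, b_v, b_h, rng):
    """One Gibbs step: sample hidden given visible, then visible given hidden."""
    # Hidden units are Bernoulli with probability sigmoid(v W + b_h)
    h_prob = sigmoid(v @ W + b_h)
    h = (rng.random(h_prob.shape) < h_prob).astype(float)
    # Visible units are sampled back from the hidden sample
    v_prob = sigmoid(h @ W.T + b_v)
    return (rng.random(v_prob.shape) < v_prob).astype(float)

# Hypothetical sizes: 88 piano pitches per timestep, 64 hidden units
rng = np.random.default_rng(0)
n_visible, n_hidden = 88, 64
W = rng.normal(0, 0.01, (n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

# Start from noise and run a few Gibbs steps to draw a sample
v = (rng.random(n_visible) < 0.1).astype(float)
for _ in range(25):
    v = gibbs_step(v, W, b_v, b_h, rng)
# v is now a binary vector: which of the 88 notes sound at this timestep
```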

Included here are two networks trained on some interesting datasets: the entire VGMusic piano MIDI database and MIDI files of Chopin's Mazurkas.


Installation

You can use pip to install any missing dependencies.

Basic Usage

First, a model must be trained on your dataset of interest. Here, I've provided a MIDI dataset consisting of Chopin's Mazurkas. I've also provided models already trained on this dataset, as well as one trained on all the piano MIDIs available on VGMusic. These models are named chopin and toadofsky, respectively.

The training file contains several user-editable variables that should be changed to fine-tune your model. A new model can be trained by specifying the directory containing your training MIDIs (the MIDIDIR variable) and running the training script.
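The training script's internals aren't reproduced here, but RBM training commonly uses contrastive divergence (CD-1). A minimal sketch of one CD-1 update, with illustrative layer sizes and a random stand-in for real MIDI batches:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b_v, b_h, lr, rng):
    """One contrastive-divergence (CD-1) update on a batch of visible vectors."""
    # Positive phase: hidden activations driven by the data
    h0_prob = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    # Negative phase: one step of reconstruction
    v1_prob = sigmoid(h0 @ W.T + b_v)
    h1_prob = sigmoid(v1_prob @ W + b_h)
    # Move weights toward the data statistics, away from the model's
    n = len(v0)
    W += lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / n
    b_v += lr * (v0 - v1_prob).mean(axis=0)
    b_h += lr * (h0_prob - h1_prob).mean(axis=0)
    return W, b_v, b_h

rng = np.random.default_rng(1)
n_visible, n_hidden = 88, 64
W = rng.normal(0, 0.01, (n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)
batch = (rng.random((10, n_visible)) < 0.1).astype(float)  # stand-in for MIDI data
W, b_v, b_h = cd1_update(batch, W, b_v, b_h, lr=0.05, rng=rng)
```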


The music generating script also contains several user-editable variables, including ones that change the MIDI instrument, timing, and musical structure. A saved model can be used for MIDI generation by specifying which model to use (the MODELDIR and MODEL variables) and then running the generation script.
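The generation script's actual variable names for instrument and timing aren't shown here, but as an illustration of the final step, here is one hypothetical way a sampled binary visible vector could be mapped to MIDI note numbers, assuming index 0 of the visible layer corresponds to the lowest piano key (A0, MIDI note 21):

```python
def roll_to_notes(visible, lowest_pitch=21):
    """Map a binary visible vector to the MIDI pitches that are 'on'.

    Assumes (hypothetically) that index 0 of the visible layer is the
    lowest piano key, A0 = MIDI note 21.
    """
    return [lowest_pitch + i for i, on in enumerate(visible) if on]

# A vector with middle C (index 39) and E4 (index 43) switched on
sample = [0] * 88
sample[39] = 1  # MIDI 60, middle C
sample[43] = 1  # MIDI 64, E4
print(roll_to_notes(sample))  # → [60, 64]
```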


To get music that sounds pleasing, remember that you may have to fiddle with both the training and generation variables.


This project was inspired by and conceptually based on dshieble's neural network music generator, which was, in turn, based on Boulanger-Lewandowski et al. (2012).
