Project Miles Ahead
Building on Magenta, TensorFlow's music generation library, Project "Miles Ahead" attempts to expand deep learning jazz generation by enhancing the "Lookback RNN" recurrent neural network.
In addition, Project Miles Ahead uses Music21 and matplotlib to conduct exploratory data analysis and data visualization on the music created by the model. By applying the same analysis to music by the legendary jazz pianist Bill Evans, this project seeks insight into the "humanity" of the computer-generated music.
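As a simplified, dependency-free sketch of this kind of exploratory analysis (the actual project uses Music21 and matplotlib; the helper function and the sample phrase below are illustrative assumptions, not the project's code), a pitch-class histogram over a list of MIDI pitches might look like:

```python
from collections import Counter

PITCH_NAMES = ["C", "C#", "D", "D#", "E", "F",
               "F#", "G", "G#", "A", "A#", "B"]

def pitch_class_histogram(midi_pitches):
    """Count how often each of the 12 pitch classes appears.

    MIDI pitch modulo 12 gives the pitch class (60 -> C, 62 -> D, ...),
    so octave-equivalent notes are counted together.
    """
    counts = Counter(p % 12 for p in midi_pitches)
    return {PITCH_NAMES[pc]: counts.get(pc, 0) for pc in range(12)}

# A short hypothetical generated phrase: C4, E4, G4, C5, E4
histogram = pitch_class_histogram([60, 64, 67, 72, 64])
print(histogram["C"])  # 2 (C4 and C5 share the pitch class C)
print(histogram["E"])  # 2
```

A dictionary like this can be handed straight to matplotlib's bar chart for visualization, and the same histogram computed over a Bill Evans solo gives a direct point of comparison.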
This project is still in progress and will be updated over time.
### For a high-level overview of the project, take a look at the "Miles Ahead Presentation" PDF file. For a deeper dive, head over to the Exploratory Data Analysis section.
Take a listen
Generated with the modified TensorFlow Magenta model, trained for 13,500 steps.
A sample of the Miles solo in score form:
Graphing the Miles solo as keyboard patterns:
The Miles solo's rhythmic patterns grouped by k-means clustering:
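A minimal, dependency-free sketch of the k-means step (the project's actual rhythmic features and cluster count are not shown here; the four-beat onset vectors below are an illustrative assumption):

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    """Component-wise mean of a non-empty list of vectors."""
    n = len(pts)
    return tuple(sum(xs) / n for xs in zip(*pts))

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: assign each point to its nearest centroid,
    then recompute each centroid as the mean of its cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: dist2(p, centroids[i]))
            clusters[idx].append(p)
        centroids = [mean(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Hypothetical feature vectors: fraction of note onsets landing on
# each of the 4 beats of a bar.
patterns = [
    (0.9, 0.0, 0.8, 0.0),   # on-beat heavy
    (0.8, 0.1, 0.9, 0.1),
    (0.1, 0.8, 0.2, 0.9),   # off-beat (syncopated) heavy
    (0.0, 0.9, 0.1, 0.8),
]
centroids, clusters = kmeans(patterns, k=2)
```

In practice a library implementation such as scikit-learn's `KMeans` would be used instead of hand-rolled code; the point here is only the shape of the data going in: one rhythm-feature vector per bar of the solo.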
Project Miles Ahead was conducted in a remote server environment using Linode and Ubuntu. If replicating the project, it is advised to use a similar infrastructure due to the processing requirements of the TensorFlow library.
Modify the built-in `midi_db.sh` on the remote server:
`midi_db.sh` can be edited to use larger batch sizes if your processor can handle it. Also make sure to adjust `num_training_steps` if you do not want to run 20,000 training steps.
Sit back and enjoy a fine beverage with one of your favorite jazz records - it's gonna be a while.
Hey! Wake up! Your models are done!
Run `melody_creation.sh` to generate melodies.
`melody_creation.sh` can be edited to change the location of the MIDI files.
From magenta GitHub:
"`--primer_melody` can be edited to specify a primer melody. The values in the list should be ints that follow the `melodies_lib.Melody` format (-2 = no event, -1 = note-off event, values 0 through 127 = note-on event for that MIDI pitch). For example, `--primer_melody="[60, -2, 60, -2, 67, -2, 67, -2]"` would prime the model with the first four notes of Twinkle Twinkle Little Star. Instead of using `--primer_melody`, we can use `--primer_midi` to prime our model with a melody stored in a MIDI file. For example, `--primer_midi=<absolute path to magenta/models/shared/primer.mid>` will prime the model with the melody in that MIDI file. If neither `--primer_melody` nor `--primer_midi` are specified, a random note from the model's note range will be chosen as the first one, then the remaining notes will be generated by the model."
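The -2/-1/pitch encoding quoted above can be illustrated with a small helper (hypothetical, not part of Magenta) that builds a primer list from (pitch, duration-in-steps) pairs:

```python
def encode_melody(notes):
    """Encode (midi_pitch, steps) pairs in the melodies_lib.Melody style:
    a note-on event (0-127) on the first step, then -2 (no event) while
    the note sustains."""
    events = []
    for pitch, steps in notes:
        events.append(pitch)               # note-on event
        events.extend([-2] * (steps - 1))  # hold through remaining steps
    return events

# First four notes of Twinkle Twinkle Little Star, two steps each
primer = encode_melody([(60, 2), (60, 2), (67, 2), (67, 2)])
print(primer)  # [60, -2, 60, -2, 67, -2, 67, -2]
```

The printed list matches the `--primer_melody` example from the Magenta documentation, so the same string can be passed on the command line.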
About Lookback RNN
From magenta GitHub:
"Lookback RNN introduces custom inputs and labels. The custom inputs allow the model to more easily recognize patterns that occur across 1 and 2 bars. They also help the model recognize patterns related to an events position within the measure. The custom labels reduce the amount of information that the RNN’s cell state has to remember by allowing the model to more easily repeat events from 1 and 2 bars ago. This results in melodies that wander less and have a more musical structure."
Code was built while significantly referencing public examples from the Magenta documentation on GitHub: https://github.com/tensorflow/magenta