A cute multi-layer LSTM network that can perform music like a human
If you wish to learn more about my findings, then please read my blog post and paper:
Iman Malik, Carl Henrik Ek, "Neural Translation of Musical Style", 2017.
You will need a few things in order to get started.
The Piano Dataset
I created my own dataset, the Piano Dataset, to train the model. If you wish to use the Piano Dataset, details can be found in my blog post.
How to Run
python main.py -current_run <name-of-session> -bi
-load_last : Load and continue from the last epoch.
-load_model : Load the specified model.
-data_dir : Directory of datasets.
-data_set : Dataset name.
-runs_dir : Directory of session files.
-forward_only : Make predictions instead of training.
-bi : Use bi-directional LSTMs. (HIGHLY recommended)
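The flags above could be wired up with Python's argparse roughly as follows. This is a hypothetical sketch for illustration only; the parser in the actual main.py may use different defaults, types, or help text.

```python
# Hypothetical sketch of a command-line parser for the flags listed above.
# The real main.py may differ; this only illustrates the interface.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="Train or run StyleNet.")
    parser.add_argument("-current_run", type=str,
                        help="Name of this session.")
    parser.add_argument("-load_last", action="store_true",
                        help="Load and continue from the last epoch.")
    parser.add_argument("-load_model", type=str,
                        help="Load the specified model.")
    parser.add_argument("-data_dir", type=str,
                        help="Directory of datasets.")
    parser.add_argument("-data_set", type=str,
                        help="Dataset name.")
    parser.add_argument("-runs_dir", type=str,
                        help="Directory of session files.")
    parser.add_argument("-forward_only", action="store_true",
                        help="Make predictions instead of training.")
    parser.add_argument("-bi", action="store_true",
                        help="Use bi-directional LSTMs.")
    return parser

# Example: the invocation shown above, parsed.
args = build_parser().parse_args(["-current_run", "demo", "-bi"])
```

Boolean switches such as -bi and -forward_only take no value; passing the flag turns the behaviour on.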
pianoify.ipynb : This was used to ensure the files across the dataset were consistent in their musical properties.
generate_audio.ipynb : This was used to make predictions using StyleNet and generate the audio.
convert-format.rb : This was used to convert format 1 MIDIs into format 0.
file_util.py : This contains folder/file-handling functions.
midi_util.py : This contains MIDI-handling functions.
model.py : The StyleNet model class.
data_util.py : For shuffling and batching data during training.
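The shuffle-and-batch step that data_util.py performs can be sketched as follows. This is a minimal standard-library illustration of the idea, not the actual code from the repository.

```python
# Minimal sketch of shuffling and batching training examples.
# Not the actual data_util.py; function and parameter names are illustrative.
import random

def batch_iter(examples, batch_size, seed=None):
    """Shuffle the examples once, then yield successive batches."""
    indices = list(range(len(examples)))
    rng = random.Random(seed)     # seeded RNG for reproducible shuffles
    rng.shuffle(indices)
    for start in range(0, len(indices), batch_size):
        batch_idx = indices[start:start + batch_size]
        yield [examples[i] for i in batch_idx]

# Example: 10 items in batches of 4 gives batch sizes 4, 4, 2.
data = list(range(10))
batches = list(batch_iter(data, batch_size=4, seed=0))
```

Shuffling the index list rather than the data itself keeps the examples in place, which matters when each example is a large piano-roll array.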