Music-Generation

This repo contains the final project by PedroUria, QuirkyDataScientist1978 and thekartikay for our Neural Network class. Below is a description of its structure, along with instructions for using the code to train LSTM networks and/or generate music with our trained networks. You can also visit www.lstmmusic.com to listen to some samples and generate music through a simple UI. However, if you want to train your own models, you should stay here. We also have a playlist.

Structure

The repo is organized into several subdirectories. code contains all the code used in the project, while data contains all the data used to train the networks. Group-Proposal contains the project proposal, Final-Group-Presentation contains the slides we presented in class, and Final-Group-Project-Report contains the report, which is the best source for understanding the project (although Final-Group-Presentation may be enough). Some of our results are located in generated-music.

Instructions

Dependencies

Software

  • Python 3.5.2: all the code is written in Python.

  • MuseScore: to open .mid files and show music scores using music21 (see the configuration sketch below).
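
If music21 does not find MuseScore on its own, you can point it to the executable through music21's user settings. This is a minimal sketch; the path below is an assumption and depends on where MuseScore is installed on your system.

    # Sketch: telling music21 where MuseScore lives.
    # The path is an assumption; adjust it for your install.
    from music21 import environment

    us = environment.UserSettings()
    us["musicxmlPath"] = "/usr/bin/musescore"             # used by .show()
    us["musescoreDirectPNGPath"] = "/usr/bin/musescore"   # used for PNG rendering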

Python Packages

Basic
  • os: to navigate the different folders.
  • time: to time some processes.
  • random: for an optional feature of the generator functions.
External
  • music21, version 5.5.0: to read and write .mid files (see the example after this list).
  • NumPy, version 1.15.0: to encode the .mid files.
  • PyTorch, version 0.4.1: to build and train the networks.
  • matplotlib, version 1.5.1: to plot losses.
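
As a quick illustration of how music21 and NumPy work together, the sketch below parses a .mid file and collects its note pitches into an array. The file name is hypothetical and this is not the repo's actual encoding scheme (see code for that).

    # Minimal sketch, not the repo's actual encoder: parse a .mid file with
    # music21 and collect MIDI pitch numbers into a NumPy array.
    import numpy as np
    from music21 import converter, note

    score = converter.parse("data/classical/example.mid")  # hypothetical file
    pitches = np.array([n.pitch.midi for n in score.flat.notes
                        if isinstance(n, note.Note)])
    print(pitches[:10])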

Some Notes

All the code in this project, apart from a Jupyter Notebook that served as the starting ground for getting familiar with music21, was run on Google Cloud Platform on an Ubuntu instance (version 16.04, code name Xenial) via a custom image provided by our professor. To install music21 on this instance, and probably on any Ubuntu VM, you need to run python3 -m pip install --user music21 in the terminal. However, by creating a virtual environment with the software, packages and versions listed above, there should not be any issues. The code will also run the networks on a GPU automatically if you have access to one.
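
The automatic GPU selection mentioned above boils down to the standard device check in the PyTorch 0.4 style, roughly:

    # Sketch of automatic GPU/CPU selection in the PyTorch 0.4 style.
    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    # Models and tensors are then moved onto the device, e.g.:
    # model = model.to(device)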

Training your own networks

If you want to experiment with training your own networks on our data or any other data, you can use the scripts in code named training_<something>.py. We found training_many_deff_two_voices_stacked.py to be the most successful in general. There are many hyperparameters you can play with inside these scripts, regarding the network architecture, the training process and the generation process (see the sketch below for the kind of architecture involved). The scripts are written to take in data from data/classical, but some easy tweaks would allow them to take in any other .mid files. You can read code/README.md and the functions' documentation, and refer to Final-Group-Project-Report/Music-LSTM.pdf and Final-Group-Presentation/slides.pdf to understand what is going on.
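
To give a feel for those hyperparameters, here is a minimal, generic sketch of a stacked LSTM in PyTorch. The class name, sizes and layer count are hypothetical, not the repo's actual values; the real architecture lives in the training scripts.

    # Generic stacked-LSTM sketch (hypothetical names and sizes, for
    # illustration only; the training scripts define the real architecture).
    import torch.nn as nn

    class MusicLSTM(nn.Module):
        def __init__(self, input_size, hidden_size=128, num_layers=2):
            super(MusicLSTM, self).__init__()
            # num_layers > 1 is what makes the network "stacked"
            self.lstm = nn.LSTM(input_size, hidden_size,
                                num_layers=num_layers, batch_first=True)
            self.out = nn.Linear(hidden_size, input_size)

        def forward(self, x, hidden=None):
            output, hidden = self.lstm(x, hidden)
            return self.out(output), hidden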

Using our trained networks to generate new music

You can also use some of the models we saved in generated-music by following the instructions in that folder. There are two generator functions, each with many hyperparameters, so you can obtain a lot of variation even when playing with the same model (see the sampling sketch below).
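
As one example of what such a generation hyperparameter can look like, the sketch below shows temperature-based sampling, a common way to trade off randomness against conservatism when generating sequences. This is a generic illustration, not necessarily the repo's exact scheme.

    # Generic temperature-sampling sketch; not necessarily the repo's
    # exact generation scheme.
    import torch
    import torch.nn.functional as F

    def sample_next(logits, temperature=1.0):
        # Higher temperature -> more random choices; lower -> more conservative.
        probs = F.softmax(logits / temperature, dim=-1)
        return torch.multinomial(probs, num_samples=1)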