# Music Generation using Magenta

The task of music generation consists of building up a new sequence iteratively: at each step, the model predicts the next notes from the sequence generated so far and appends them to it. A minimal sketch of this loop is shown below.
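The following is a hedged illustration of that autoregressive loop, not Magenta's actual generator code; `model.predict_next` and the sampling details are hypothetical placeholders standing in for the trained model.

```python
# Minimal sketch of the iterative (autoregressive) generation loop.
# `model.predict_next` is a hypothetical placeholder, not a Magenta API;
# the real project relies on Magenta's sequence generators instead.
import numpy as np

def generate(model, primer, num_steps, temperature=1.0):
    """Extend `primer` (a list of note tokens) by `num_steps` predicted notes."""
    sequence = list(primer)
    for _ in range(num_steps):
        # Predict a distribution over the next note from the sequence so far.
        probs = np.asarray(model.predict_next(sequence))  # hypothetical call
        probs = probs ** (1.0 / temperature)
        probs /= probs.sum()
        next_note = np.random.choice(len(probs), p=probs)
        sequence.append(next_note)  # feed the prediction back in as input
    return sequence
```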

We use Magenta, an open-source Python library powered by TensorFlow. The library includes utilities for manipulating source data (primarily music and images), using this data to train machine learning models, and generating new content from those models. An example of reading and writing music data with these utilities is sketched below.
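As a sketch of the data-manipulation side, the snippet below reads a MIDI file into Magenta's NoteSequence format and writes one back out. It assumes the standalone `note_seq` package; these utilities were historically available under `magenta.music`, and the file names are illustrative.

```python
# Read a MIDI file into the NoteSequence representation used by Magenta.
import note_seq

sequence = note_seq.midi_file_to_note_sequence('primer.mid')

# Each note carries pitch and timing information (the symbolic representation).
for note in sequence.notes[:5]:
    print(note.pitch, note.start_time, note.end_time)

# A (possibly model-generated) NoteSequence can be written back to MIDI.
note_seq.sequence_proto_to_midi_file(sequence, 'output.mid')
```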

Magenta works on a symbolic representation of music, MIDI, in which a piece is described in terms of its composition and harmony rather than raw audio. An RNN is used for music generation because it operates on a sequence of vectors, so the input and output lengths can be arbitrary. An illustrative model is sketched below.
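The model below is an illustrative next-note predictor, not the exact Magenta architecture: an LSTM over a sequence of note tokens (128 MIDI pitches, an assumed vocabulary) that outputs a distribution over the next pitch. The sequence length is left unspecified, reflecting the arbitrary input/output sizes mentioned above.

```python
# Illustrative next-note RNN (not Magenta's exact architecture).
import tensorflow as tf

NUM_PITCHES = 128  # standard MIDI pitch range, used here as the vocabulary size

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(NUM_PITCHES, 64),    # note token -> vector
    tf.keras.layers.LSTM(128),                     # summarize the sequence so far
    tf.keras.layers.Dense(NUM_PITCHES, activation='softmax'),  # next-note distribution
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
```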

Note: This project was done as a semester course project for the ECE-UY 4563 Machine Learning course at NYU, taught by Prof. Sundeep Rangan.

Done by: Siddharth Choudhary (sc7530) and Kshitija Patel (kap676)