Fusion of Music Styles Using LSTM Recurrent Neural Networks


This repo is the result of a collaboration between Jacob Sundstrom, Harsh Lal, Dave DeFillipo, and Nakul Tiruviluamala in a class entitled "Machine Learning and Music". We were each interested in extracting "musical features" from existing music and recombining them into new, fused musical works. A paper was written and submitted; its abstract is pasted below.


The appeal of a musical composition is almost exclusively subjective: it is a combination of an individual's tastes, preferences, and history of experience, and is therefore perceived and judged qualitatively in different ways by different listeners. In this project we propose to build a deep learning system that takes n different samples of a jazz soloist - in particular, samples of a variety of specific 'styles' - and generates sound using the current input together with feedback and memory from past samples. The output can then be judged by a human agent, and the parameters of the neural network adjusted accordingly to generate a fusion music that is closer to, and more appealing to, the agent's expectations. Recurrent neural networks, and in particular those with Long Short-Term Memory (LSTM) units, have shown promise as modules that can learn long song sequences and generate new compositions based on a song's harmonic structure and the feedback inherent in the network.
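The "memory from past samples" that the abstract relies on comes from the LSTM cell's gated state. As a minimal, self-contained sketch (plain Python, not the project's actual model; the toy 12-tone pitch encoding, dimensions, and random weights are all illustrative assumptions), one LSTM time step looks like this:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step over plain Python lists.

    x:      input vector (e.g. a one-hot encoded pitch), length n_in
    h_prev: previous hidden state, length n_hid
    c_prev: previous cell state ("memory"), length n_hid
    W, U:   input and recurrent weight matrices, 4*n_hid rows each
    b:      biases, length 4*n_hid
    """
    n_hid = len(h_prev)
    # Pre-activations for all four gates stacked: z = W x + U h_prev + b
    z = [b[k]
         + sum(W[k][j] * x[j] for j in range(len(x)))
         + sum(U[k][j] * h_prev[j] for j in range(n_hid))
         for k in range(4 * n_hid)]
    i = [sigmoid(v) for v in z[0:n_hid]]           # input gate
    f = [sigmoid(v) for v in z[n_hid:2*n_hid]]     # forget gate: how much old memory to keep
    o = [sigmoid(v) for v in z[2*n_hid:3*n_hid]]   # output gate
    g = [math.tanh(v) for v in z[3*n_hid:]]        # candidate cell update
    # New cell state mixes retained memory with the gated new input
    c = [f[k] * c_prev[k] + i[k] * g[k] for k in range(n_hid)]
    # Hidden state passed to the next time step (and to the output layer)
    h = [o[k] * math.tanh(c[k]) for k in range(n_hid)]
    return h, c

# Toy usage: feed a short "melody" (one-hot pitches over a hypothetical
# 12-tone vocabulary) through the cell; h and c accumulate context from
# all earlier notes, which is what lets the network generate material
# conditioned on the harmonic history of the samples.
random.seed(0)
n_in, n_hid = 12, 8
W = [[random.gauss(0, 0.1) for _ in range(n_in)] for _ in range(4 * n_hid)]
U = [[random.gauss(0, 0.1) for _ in range(n_hid)] for _ in range(4 * n_hid)]
b = [0.0] * (4 * n_hid)

h = [0.0] * n_hid
c = [0.0] * n_hid
for pitch in [0, 4, 7, 4]:            # scale degrees of a toy arpeggio
    x = [0.0] * n_in
    x[pitch] = 1.0
    h, c = lstm_step(x, h, c, W, U, b)

print(len(h))  # hidden-state size carried to the next step
```

In a full model, the hidden state at each step would feed a softmax over the pitch vocabulary to predict (or sample) the next note, and the human listener's judgment would drive further parameter adjustment.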


  • Jacob Sundstrom, Department of Music, UCSD
  • Harsh Lal, Computer Science and Engineering, UCSD