# Fusion of Music Styles Using LSTM Recurrent Neural Networks

## Background

This repo is the result of a collaboration between Jacob Sundstrom, Harsh Lal, Dave DeFillipo, and Nakul Tiruviluamala in a class entitled "Machine Learning and Music". We each expressed interest in extracting "musical features" from music and recombining them to form new, fused musical works. A paper was written and submitted.

The paper's abstract is reproduced below.

## Abstract

The appeal of a musical composition is almost exclusively subjective: it is a combination of the tastes, preferences, and history of an individual's experiences, and is therefore perceived and judged qualitatively in different ways by different individuals. In this project we propose to build a deep learning system that takes n different samples of a jazz soloist, especially samples in a variety of specific 'styles', and generates sound using the current input as well as feedback and memory from past samples. The generated output can then be judged by a human agent, and the parameters of the neural network adjusted accordingly to produce fusion music that is closer to, and more appealing to, the agent's expectations. Recurrent neural networks, and Long Short-Term Memory networks (LSTMs) in particular, have shown promise as modules that can learn long song sequences and generate new compositions based on a song's harmonic structure and the feedback inherent in the network.
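The "feedback and memory" the abstract refers to is the LSTM's gated cell state, which carries information across time steps of a song. The sketch below is purely illustrative and is not taken from this repo's code: it implements a single scalar LSTM cell from scratch (the class name `LSTMCell`, the gate weights, and the toy MIDI pitch sequence are all assumptions made for the example) to show how the forget, input, and output gates update the long-term memory `c` and the output `h` as a pitch sequence is fed in.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class LSTMCell:
    """Minimal single-unit LSTM cell: scalar input, scalar hidden state.
    Weights are random; a real system would train them on song data."""

    def __init__(self, seed=0):
        rng = random.Random(seed)
        # For each gate: one weight for the input, one for the previous
        # hidden state, and one bias term.
        self.w = {g: [rng.uniform(-0.1, 0.1) for _ in range(3)]
                  for g in ("f", "i", "o", "c")}

    def step(self, x, h, c):
        f = sigmoid(self.w["f"][0] * x + self.w["f"][1] * h + self.w["f"][2])
        i = sigmoid(self.w["i"][0] * x + self.w["i"][1] * h + self.w["i"][2])
        o = sigmoid(self.w["o"][0] * x + self.w["o"][1] * h + self.w["o"][2])
        c_tilde = math.tanh(self.w["c"][0] * x + self.w["c"][1] * h + self.w["c"][2])
        c_new = f * c + i * c_tilde      # gated long-term memory of the song so far
        h_new = o * math.tanh(c_new)     # gated output at this time step
        return h_new, c_new

# Feed a short pitch sequence (MIDI note numbers, normalized to [0, 1]).
cell = LSTMCell()
h, c = 0.0, 0.0
for note in [60, 62, 64, 65]:
    h, c = cell.step(note / 127.0, h, c)
```

In a trained network, `h` at each step would parameterize a distribution over the next note, and the human agent's judgments would drive further adjustment of the weights.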

## Contributors

- Jacob Sundstrom, Department of Music, UCSD
- Harsh Lal, Computer Science and Engineering, UCSD
