OrpheussSorrow

Here is my Bachelor's Degree Thesis, Music and Feelings: A Deep Learning Approach to Emotional Composition

Orpheus's Sorrow is the name of the web application in which one can generate a piece of music from a given emotion, expressed through its valence and arousal values.

Abstract

Year after year, the music industry sets new revenue records, earning billions of dollars through different mediums, from LP records to CDs and, with the rise of the Internet, streaming platforms. This growth can be attributed to the ease of producing and releasing new songs, processes facilitated by tools such as Digital Audio Workstations and Virtual Studio Technology (VST) plugins. Even with all these advancements, the way one approaches composition has not changed in hundreds of years: composing from inspiration, combined with trial and error corrected by music theory.

This thesis introduces a tool designed to ease music composition and flatten the steep learning curve of music theory by helping users compose songs based on feelings. The main goal of the application is to let the user generate songs from given emotion data: the user provides the input emotion values, and the software responds with a composed song in which one can recognize that emotion.
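In the valence-arousal model, an emotion is a point on a two-dimensional plane: valence measures how positive the feeling is, arousal how energetic. A minimal sketch of how such input values might map onto recognizable emotion labels (the function name and quadrant labels are illustrative, not taken from the thesis):

```python
# Hypothetical helper: name the four quadrants of the valence-arousal
# plane, with both values assumed to lie in [-1, 1].
def quadrant_emotion(valence: float, arousal: float) -> str:
    if valence >= 0 and arousal >= 0:
        return "happy/excited"      # positive, energetic
    if valence < 0 and arousal >= 0:
        return "angry/tense"        # negative, energetic
    if valence < 0:
        return "sad/depressed"      # negative, low energy
    return "calm/relaxed"           # positive, low energy

print(quadrant_emotion(0.8, 0.6))    # happy/excited
print(quadrant_emotion(-0.7, -0.4))  # sad/depressed
```

In practice the application would pass the raw (valence, arousal) pair to the model rather than a discrete label, so the generated music can vary continuously across the plane.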

The application consists of two parts: the frontend web application with which the user interacts, and the backend on which our machine learning model runs. For the emotion data, the valence-arousal model was used, which posits that any feeling can be expressed through those two values. For the composing algorithm, autoencoders were used for their capability of learning the internal structure of the data, which is then used to generate new songs from new input.
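The autoencoder idea above can be sketched as follows: an encoder compresses a melody into a small latent code, and a decoder reconstructs (or generates) a melody from such a code. This is a minimal PyTorch sketch under assumed shapes (fixed-length melody vectors, a small latent space), not the thesis's actual architecture:

```python
import torch
from torch import nn

# Illustrative sketch: melodies are assumed to be fixed-length vectors
# of 32 normalized pitch values; all layer sizes are arbitrary choices.
class MelodyAutoencoder(nn.Module):
    def __init__(self, seq_len: int = 32, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(seq_len, 16), nn.ReLU(),
            nn.Linear(16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 16), nn.ReLU(),
            nn.Linear(16, seq_len),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)      # compress melody to a latent code
        return self.decoder(z)   # reconstruct a melody from the code

model = MelodyAutoencoder()
batch = torch.rand(4, 32)        # four fake normalized melodies
recon = model(batch)
print(recon.shape)               # torch.Size([4, 32])
```

After training on reconstruction loss, new material can be generated by decoding latent codes directly; conditioning those codes on valence and arousal is what ties generation to the requested emotion.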

This work is the result of my own activity. I have neither given nor received unauthorized assistance on this work.

For more information, you can read the contents of the paper here.