SongGuardian 🎵

⚠️ UNDER DEVELOPMENT ⚠️

Music plays a major role in people's lives and influences different parts of the daily routine, such as mood or performance in sports, study, and work. Therefore, in the contemporary era of information, many researchers have dedicated their work to improving music recommendation systems that enable people to find new music and allow artists to reach a wider public.

This work emerges from the hypothesis that not all humans are sensitive to the same musical features, and that the first question should instead be: what makes us like or dislike music?

Therefore, I decided to study the feature spaces generated by the songs I like the most and the ones I hate the most, looking to detect which aspect of music I am most sensitive to. Is it the timbre? The rhythm? A combination of both? Although I am sure context can strongly influence the emotions perceived in a musical piece, there is a reason why friends and family tend to give accurate recommendations free of context-based biases: they are capable of learning your musical taste, even though they are probably not able to describe it.

The aim of this repository is to implement Deep Learning models trained with timbral features to act as a "song guardian" for the user. This project originated from an artificial intelligence course I took during my bachelor's degree. After accounting for the appropriate learning considerations based on Probably Approximately Correct (PAC) learning, simple learning machines were trained, and I discovered that timbre is the feature that best generalizes my musical taste compared to rhythm, pitch, or dynamics (Notebook 🎹). The goal of this repository is to build on that knowledge and create a Deep Learning model that learns from timbral features to make personalized music recommendations for IsitaRex.

Installation instructions 💻

To replicate the results, follow these steps:

Install the requirements either with pip

pip install -r requirements.txt

or create a conda environment

conda create --name musical_taste --file requirements.txt

Then activate your environment

conda activate musical_taste

Feature extraction 🎵

My dataset is a combination of two playlists:
💿 Songs I love
💿 Songs I hate

You can build your own dataset by creating a folder data with two subfolders named Like and Dislike. The first step is to make all songs the same length:

python .\data_processing.py 
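
For reference, the sketch below illustrates the kind of trimming/padding step this script performs. It is a minimal illustration, not the repository's exact code; the sampling rate, clip length, and .wav extension are assumptions, and it relies on librosa and soundfile being installed.

# Hypothetical sketch of making all songs the same length (not the repository's exact script).
import pathlib
import librosa
import soundfile as sf

SAMPLE_RATE = 22050          # assumed sampling rate
CLIP_SECONDS = 30            # assumed common clip length

for label in ("Like", "Dislike"):
    for path in pathlib.Path("data", label).glob("*.wav"):
        audio, _ = librosa.load(path, sr=SAMPLE_RATE, mono=True)
        # Zero-pad or truncate so every song has the same number of samples
        audio = librosa.util.fix_length(audio, size=SAMPLE_RATE * CLIP_SECONDS)
        sf.write(path, audio, SAMPLE_RATE)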

To extract the features, run:

python .\feature_extraction.py 
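
The features are timbral; the sketch below shows one common timbral representation (MFCCs) extracted with librosa. It is an illustrative example under assumptions, not the repository's exact script; the number of coefficients and the output file name are hypothetical.

# Hypothetical sketch of timbral (MFCC) feature extraction.
import pathlib
import numpy as np
import librosa

SAMPLE_RATE = 22050
N_MFCC = 20                  # assumed number of coefficients

features, labels = [], []
for label_idx, label in enumerate(("Like", "Dislike")):
    for path in pathlib.Path("data", label).glob("*.wav"):
        audio, _ = librosa.load(path, sr=SAMPLE_RATE, mono=True)
        mfcc = librosa.feature.mfcc(y=audio, sr=SAMPLE_RATE, n_mfcc=N_MFCC)
        features.append(mfcc)          # shape: (n_mfcc, n_frames)
        labels.append(label_idx)       # 0 = Like, 1 = Dislike

np.savez("features.npz", x=np.stack(features), y=np.array(labels))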

Models 🎵

RNN

To train an RNN for 100 epochs with a batch size of 32 and a learning rate of 0.1, run:

python main.py --epochs 100 --batch_size 32 --lr 0.1 --task RNN
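
As a rough illustration of what --task RNN trains, the sketch below shows a minimal recurrent classifier over MFCC frame sequences. It assumes PyTorch; the GRU layer, hidden size, optimizer, and binary Like/Dislike output are illustrative assumptions, not the repository's exact architecture.

# Minimal sketch of a recurrent Like/Dislike classifier (hypothetical architecture).
import torch
import torch.nn as nn

class SongRNN(nn.Module):
    def __init__(self, n_features=20, hidden_size=64, num_layers=2):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, 2)    # Like / Dislike

    def forward(self, x):                      # x: (batch, time, n_features)
        _, h = self.rnn(x)                     # h: (num_layers, batch, hidden_size)
        return self.fc(h[-1])                  # classify from the last hidden state

model = SongRNN()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # matches --lr 0.1 (optimizer choice assumed)
criterion = nn.CrossEntropyLoss()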

CNN

To train a CNN for 100 epochs with a batch size of 32 and a learning rate of 0.1, run:

python main.py --epochs 100 --batch_size 32 --lr 0.1 --task CNN
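
Similarly, the sketch below illustrates a minimal convolutional classifier that treats the MFCC matrix as a one-channel image. It assumes PyTorch; the layer sizes, pooling choices, and optimizer are illustrative assumptions rather than the repository's exact model.

# Minimal sketch of a convolutional Like/Dislike classifier (hypothetical architecture).
import torch
import torch.nn as nn

class SongCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):                      # x: (batch, 1, n_mfcc, n_frames)
        return self.fc(self.conv(x).flatten(1))

model = SongCNN()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # matches --lr 0.1 (optimizer choice assumed)
criterion = nn.CrossEntropyLoss()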
