This repository contains the code and documentation for the Music Emotion Recognition project. The project aims to develop an advanced model for recognizing emotional nuances within classical music compositions using deep learning techniques, particularly focusing on transfer learning and attention mechanisms.
- `Data/`: Curated dataset of classical music compositions, including instrument-specific emotional samples.
- `preprocessing/`: Python scripts used to preprocess and split the data.
- `training/`: Original code used to train the pre-trained model.
- `notebooks/`: Jupyter notebooks for analysing and visualising the data and results.
- `results/`: Outcomes of the experiments, including performance metrics, graphs, and visualizations.
- `pre-trained model/`: The pre-trained model used for the project; the original repository can be found at https://github.com/minzwon/self-attention-music-tagging/tree/master.
- `main.py`: Python script to run experiments and build the model for transfer learning.
- `README.md`: You're currently reading this file! It provides an overview of the project, its contents, and instructions on how to use the code.
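The actual split logic lives in the `preprocessing/` scripts; as a rough illustration only (the ratios, filenames, and function names here are assumptions, not the project's code), a seeded random train/validation/test split can be sketched like this:

```python
import random

def split_dataset(items, train=0.8, val=0.1, seed=42):
    """Shuffle items with a fixed seed, then cut into train/val/test partitions."""
    rng = random.Random(seed)       # fixed seed keeps splits reproducible
    items = list(items)
    rng.shuffle(items)
    n = len(items)
    n_train = int(n * train)
    n_val = int(n * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# Illustrative filenames; the real dataset layout may differ.
tracks = [f"clip_{i}.wav" for i in range(10)]
tr, va, te = split_dataset(tracks)
print(len(tr), len(va), len(te))  # 8 1 1
```

Seeding the shuffle matters here: it keeps the same clips in the same partition across experiment runs, so results in `results/` stay comparable.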
- Clone the repository: `git clone https://github.com/dancingninjas1/emotion-recognition.git`, then `cd emotion-recognition`.
- The `main.py` script serves as the central point to run experiments and build models for transfer learning.
- Check the `results/` directory for the outcomes of your experiments. Here you'll find performance metrics, graphs, and visualizations that showcase the effectiveness of the different strategies.
- Use the Jupyter notebooks in the `notebooks/` directory to understand the analysis, data distribution, and the evaluation case study for the project.
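The attention mechanism mentioned above comes from the self-attention tagging backbone. As a dependency-free sketch only (not the project's actual implementation, and the dimensions are illustrative), the core idea of attention pooling is to weight per-frame features by softmaxed relevance scores and sum them:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_pool(frames, scores):
    """Collapse per-frame feature vectors into one vector,
    weighting each frame by softmax(scores)."""
    weights = softmax(scores)
    dim = len(frames[0])
    return [sum(w * f[d] for w, f in zip(weights, frames)) for d in range(dim)]

frames = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # 3 frames, 2-dim features each
scores = [0.1, 0.2, 2.0]                       # per-frame relevance scores
pooled = attention_pool(frames, scores)
print(len(pooled))  # 2
```

Because the weights sum to one, the pooled vector stays on the same scale as the frame features while emphasising the frames the model scores as most relevant, which is what lets the tagger focus on emotionally salient passages.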
I would like to express my gratitude to Dr. Johan Pauwels for his invaluable guidance and support throughout this project.
This project is licensed under the MIT License.