A machine learning model for recognising emotion from sound files

Speech-Emotion-Recognition

Voice often reflects underlying emotion through tone and pitch. Speech Emotion Recognition (SER) builds on this fact: it is the task of recognizing the emotional content of speech irrespective of its semantic content. In this Python project, I have built a model that recognizes emotion from sound files.

Dataset

RAVDESS dataset

  • This dataset contains 7356 files, each rated 10 times on emotional validity, intensity, and genuineness by 247 individuals.

Some libraries used particularly for this project:

  • librosa
  • pyaudio
  • soundfile

Steps

  1. Loading the dataset
  2. Extracting features from it
  3. Splitting it into train and test sets
  4. Initializing an MLPClassifier
  5. Training the model
  6. Calculating the accuracy of the model
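Steps 3 through 6 can be sketched with scikit-learn as below. Since the RAVDESS files are not bundled here, random synthetic feature vectors and labels stand in for the extracted features; the `hidden_layer_sizes` and other hyperparameters are placeholder assumptions, not the repository's actual settings.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Hypothetical stand-in for the extracted RAVDESS features:
# 200 samples, 40 features each, 4 emotion classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))
y = rng.integers(0, 4, size=200)

# Step 3: split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Steps 4-5: initialise an MLPClassifier and train it
model = MLPClassifier(hidden_layer_sizes=(300,), max_iter=500,
                      random_state=0)
model.fit(X_train, y_train)

# Step 6: accuracy of the model on the held-out test set
acc = accuracy_score(y_test, model.predict(X_test))
print(f"accuracy: {acc:.2f}")
```

With real MFCC features in place of the random vectors, the same pipeline applies unchanged.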
