Voice Emotion Recognition of Audio (VERA) is an open-source project created in the Data Science track of CUNY Tech Prep (CTP), Cohort 8. 🔊

NOTE: The original GitHub repository of this project can be found here.

Voice Emotion Recognition of Audio 🔊

INTRODUCTION

Waveform illustration

An audio classification project that takes audio files of recorded human speech, primarily in .wav format, and predicts the emotion conveyed by the voice.
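
To illustrate how such a pipeline typically begins, here is a minimal sketch of loading a clip with Librosa and reducing it to a fixed-length MFCC feature vector. The function name and parameter values are illustrative assumptions, not the project's exact code; the real preprocessing lives in the notebook.

    # Hedged sketch: turn one .wav clip into a fixed-size feature vector.
    import librosa
    import numpy as np

    def extract_features(path, sr=22050, n_mfcc=40):
        # librosa.load decodes the file and resamples it to `sr`.
        signal, sr = librosa.load(path, sr=sr)
        # MFCCs are a compact summary of the speech spectrum over time.
        mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
        # Averaging over time gives every clip the same feature length.
        return np.mean(mfcc, axis=1)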

DATASETS USED

  1. RAVDESS Emotional Speech Dataset on Kaggle

    This portion of the RAVDESS contains 1440 files: 60 trials per actor × 24 actors. The RAVDESS features 24 professional actors (12 female, 12 male) vocalizing two lexically matched statements in a neutral North American accent. Speech emotions include calm, happy, sad, angry, fearful, surprise, and disgust expressions. Each expression is produced at two levels of emotional intensity (normal, strong), with an additional neutral expression.

  2. CREMA-D: Crowd Sourced Emotional Multimodal Actors Dataset

    CREMA-D is a dataset of 7,442 original clips from 91 actors: 48 male and 43 female, between the ages of 20 and 74, from a variety of races and ethnicities (African American, Asian, Caucasian, Hispanic, and Unspecified). Actors spoke from a selection of 12 sentences. The sentences were presented using one of six emotions (Anger, Disgust, Fear, Happy, Neutral, and Sad) and four emotion levels (Low, Medium, High, and Unspecified).

  3. SAVEE: Surrey Audio-Visual Expressed Emotion

    The SAVEE database was recorded from four native English male speakers (identified as DC, JE, JK, KL), postgraduate students and researchers at the University of Surrey aged 27 to 31 years. Emotion is described psychologically in discrete categories: anger, disgust, fear, happiness, sadness, and surprise. A neutral category is also added, giving recordings in 7 emotion categories.

    The text material consisted of 15 TIMIT sentences per emotion: 3 common, 2 emotion-specific, and 10 generic sentences that were different for each emotion and phonetically balanced. The 3 common and 2 × 6 = 12 emotion-specific sentences were also recorded as neutral to give 30 neutral sentences. This resulted in a total of 120 utterances per speaker.

  4. TESS: Toronto Emotional Speech Set

    A set of 200 target words were spoken in the carrier phrase "Say the word _" by two actresses (aged 26 and 64 years), and recordings were made of the set portraying each of seven emotions (anger, disgust, fear, happiness, pleasant surprise, sadness, and neutral). There are 2,800 data points (audio files) in total.

    The dataset is organized so that each combination of the two actresses and their emotions is contained within its own folder, and within each folder are the audio files for all 200 target words. The audio files are in WAV format; a sketch of indexing this layout appears after this list.
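
Because these corpora encode their labels in file or folder names, building a labeled index is mostly a matter of walking directories. Below is a minimal sketch for a TESS-style layout, assuming folders named like OAF_angry or YAF_happy; the root path and naming convention are assumptions for illustration and may need adjusting to the actual download.

    # Hedged sketch: build a (path, emotion) table from a TESS-style folder tree.
    from pathlib import Path
    import pandas as pd

    def index_tess(root):
        rows = []
        for folder in Path(root).iterdir():
            if not folder.is_dir():
                continue
            # The emotion label is the folder-name suffix after the actress ID.
            emotion = folder.name.split("_", 1)[-1].lower()
            for wav in folder.glob("*.wav"):
                rows.append({"path": str(wav), "emotion": emotion})
        return pd.DataFrame(rows)

    # df = index_tess("data/TESS")         # hypothetical local path
    # print(df["emotion"].value_counts())  # expect ~200 clips per folder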

TECHNOLOGIES

  1. Librosa
  2. NumPy
  3. Pandas
  4. Seaborn
  5. Plotly
  6. TensorFlow/Keras
  7. Scikit-Learn
  8. Kaggle
  9. Streamlit
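
As a rough illustration of how these libraries fit together, the sketch below trains a small Keras classifier on the MFCC vectors from the earlier extraction sketch. The layer sizes, dropout rate, and training settings are illustrative assumptions, not the architecture from vera-notebook.ipynb.

    # Hedged sketch: a small dense network over fixed-length MFCC vectors.
    import tensorflow as tf
    from sklearn.model_selection import train_test_split

    NUM_EMOTIONS = 7  # e.g. angry, disgust, fear, happy, neutral, sad, surprise

    def build_model(n_features=40):
        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(n_features,)),
            tf.keras.layers.Dense(256, activation="relu"),
            tf.keras.layers.Dropout(0.3),
            tf.keras.layers.Dense(128, activation="relu"),
            tf.keras.layers.Dense(NUM_EMOTIONS, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    # X: (n_samples, 40) MFCC features, y: integer emotion labels
    # X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, stratify=y)
    # build_model().fit(X_tr, y_tr, validation_data=(X_va, y_va), epochs=30)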

UI

User Interface Design illustration
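
The interface is built with Streamlit. Here is a hedged sketch of how upload, playback, and prediction might be wired together; the model file name vera_model.h5 and the feature extraction settings are assumptions carried over from the earlier sketches, not the app's actual code.

    # Hedged Streamlit sketch: upload a .wav, extract features, show a prediction.
    import librosa
    import numpy as np
    import streamlit as st
    import tensorflow as tf

    EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

    st.title("VERA: Voice Emotion Recognition of Audio")
    uploaded = st.file_uploader("Upload a speech recording", type=["wav"])
    if uploaded is not None:
        st.audio(uploaded)  # let the user play the clip back
        # librosa.load accepts file-like objects such as Streamlit uploads.
        signal, sr = librosa.load(uploaded, sr=22050)
        features = np.mean(librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=40), axis=1)
        model = tf.keras.models.load_model("vera_model.h5")  # hypothetical file name
        probs = model.predict(features[np.newaxis, :])[0]
        st.write("Predicted emotion: " + EMOTIONS[int(np.argmax(probs))])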

NOTEBOOK VIEWER LINK

You can view the Jupyter Notebook containing all the workings of the model here.

SETUP INSTRUCTIONS

You can view the setup instructions for your operating system in the setup folder here. The file paths must follow the conventions found in the .env.txt file here.

SPECIAL MENTIONS

Vijay Anandan, who helped coordinate the ideation of the project and provided guidance.

CONTRIBUTION AND FEEDBACK

If you would like to contribute or have any feedback for this project, please feel free to contact any one of the contributors. Moreover, if you need any of the files generated by vera-notebook.ipynb, please feel free to contact me.

CODE LICENSE

MIT License

Copyright (c) 2022 Georgios Ioannou, Hussam Marzooq, Alex Ruan

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
