ashishmd/AudioMLProject3
# Audio-based ML Project 3: Emotion Recognition

Employ speaker detection classifiers for emotion recognition, a multiclass classification problem. Voice activity detection (VAD) is conducted as in speaker detection: as a preprocessing step to filter out non-speech frames. The established baseline method is again MAP adaptation of a general GMM, the universal background model (UBM). Instead of using speaker-specific enrollment data to adapt the UBM to a speaker model, we now adapt the UBM with emotion-specific data (from multiple speakers).
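The core of this baseline is relevance-MAP adaptation of the UBM means: each Gaussian component is pulled toward the emotion-specific data in proportion to the soft frame counts it collects. The sketch below, a minimal NumPy illustration and not code from this repository (all function and parameter names are made up here), implements the classic update `mu_k' = alpha_k * E_k[x] + (1 - alpha_k) * mu_k` with `alpha_k = n_k / (n_k + r)` for a diagonal-covariance GMM:

```python
import numpy as np

def map_adapt_means(ubm_means, ubm_covars, ubm_weights, data, relevance=16.0):
    """MAP-adapt the means of a diagonal-covariance GMM (the UBM)
    toward emotion-specific feature frames.

    Illustrative sketch: ubm_means (K, D), ubm_covars (K, D) diagonal
    variances, ubm_weights (K,), data (N, D) MFCC frames.
    """
    K, D = ubm_means.shape
    # Log-likelihood of each frame under each diagonal Gaussian component.
    log_probs = np.empty((len(data), K))
    for k in range(K):
        diff = data - ubm_means[k]
        log_probs[:, k] = (
            np.log(ubm_weights[k])
            - 0.5 * np.sum(np.log(2 * np.pi * ubm_covars[k]))
            - 0.5 * np.sum(diff**2 / ubm_covars[k], axis=1)
        )
    # Posterior responsibilities: softmax over components, per frame.
    log_probs -= log_probs.max(axis=1, keepdims=True)
    resp = np.exp(log_probs)
    resp /= resp.sum(axis=1, keepdims=True)

    n_k = resp.sum(axis=0)                       # soft frame counts per component
    first_moment = resp.T @ data                 # responsibility-weighted frame sums
    e_k = first_moment / np.maximum(n_k, 1e-10)[:, None]
    alpha = (n_k / (n_k + relevance))[:, None]   # data-vs-prior interpolation weight
    # Components that saw little data stay close to the UBM prior means.
    return alpha * e_k + (1 - alpha) * ubm_means
```

Running this once per emotion class yields one adapted GMM per emotion; at test time an utterance is scored against all five adapted models and classified by the highest log-likelihood.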

Goals

- Familiarize yourself with emotion corpora (structure and annotation)
- Mix the corpora with noise and convolve with room impulse responses (IRs)
- Extract MFCCs
- Modify your speaker detection classifiers for speaker-independent emotion recognition
- Present your results
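The noise-mixing and IR-convolution steps above can be sketched as follows. This is an illustrative NumPy helper under assumed conventions (mono signals as 1-D float arrays, target SNR in dB), not code from this repository:

```python
import numpy as np

def mix_with_noise(speech, noise, snr_db):
    """Mix speech with noise at a target signal-to-noise ratio (dB).

    The noise is looped or trimmed to the speech length, then scaled so
    that mean(speech^2) / mean(scaled_noise^2) equals 10^(snr_db / 10).
    """
    noise = np.resize(noise, speech.shape)
    speech_power = np.mean(speech**2)
    noise_power = np.mean(noise**2)
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

def convolve_with_ir(speech, impulse_response):
    """Simulate room acoustics by convolving speech with an impulse
    response, truncated back to the original signal length."""
    return np.convolve(speech, impulse_response)[: len(speech)]
```

Applying both transforms to clean emotion corpora before MFCC extraction makes the downstream classifiers more robust to recording conditions that differ from the clean training data.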

About

Emotion recognition from a speaker's speech data. Employ speaker detection classifiers for emotion recognition, a multiclass classification problem. Emotion classes: Happy, Sad, Neutral, Relaxed, and Angry.


Languages

- Python 92.2%
- Shell 7.8%