
Speech Emotion Recognition using machine learning



Overview:

  • This project is based on machine learning and deep learning: we train the models on the RAVDESS dataset, which consists of audio files labelled with basic emotions.
  • The goal is not only to predict emotion from speech, but also to carry out an analytical study: apply different machine learning algorithms and neural networks with different architectures, then compare and analyse their results to draw useful insights.


Intro ..

For human beings, speech is among the most natural ways to express ourselves. Since emotions play a vital role in communication, detecting and analysing them is of vital importance in today’s digital world of remote communication. Emotion detection is a challenging task because emotions are subjective; there is no common consensus on how to measure or categorize them.


Check out my Medium blog for a quick intuition and understanding.


Dependencies:

  • python

  • librosa

  • soundfile

  • numpy

  • keras

  • sklearn

  • pandas


Project Details

The models discussed in this repository are SVM, Decision Tree, Random Forest, MLP and CNN, including MLP and CNN networks in several different architectures.

  • utilities.py - Contains the feature-extraction and dataset-loading functions

  • loading_data.py - Loads the dataset and splits it into training and testing sets

  • mlp_classifier_for_SER.py - Contains the MLP model code

  • SER_using_ML_algorithms.py - Contains the SVM, Random Forest and Decision Tree models

  • Speech_Emotion_Recognition_using_CNN.ipynb - Contains the 1-D CNN model


NOTE: The remaining .ipynb files contain the same code as the files above, shared from Google Colab.

Dataset Source - RAVDESS


In this project, I use the RAVDESS dataset for training.



You can find this dataset on Kaggle at the link below:
https://www.kaggle.com/uwrfkaggler/ravdess-emotional-speech-audio

The dataset contains 2452 audio files from 12 male and 12 female speakers. The lexical content (vocabulary) of the utterances is kept constant: every speaker says the same 2 statements of equal length in 8 different emotions. This dataset was chosen because it consists of speech and song files rated by 247 untrained Americans across eight different emotions at two intensity levels: Calm, Happy, Sad, Angry, Fearful, Disgust, and Surprise, along with a Neutral baseline for each actor.

Pro tip: if you are using Google Colab, use the Kaggle API to pull the data from Kaggle quickly and painlessly :)
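For example, something like the following minimal sketch downloads and unzips the dataset in a Colab session. It uses the official kaggle package and assumes your kaggle.json API token is already in place; it is an illustration, not code from this repository.

# Minimal sketch: download RAVDESS in Colab via the Kaggle API.
# Assumes the `kaggle` package is installed and kaggle.json lives under ~/.kaggle/.
from kaggle.api.kaggle_api_extended import KaggleApi

api = KaggleApi()
api.authenticate()
api.dataset_download_files(
    "uwrfkaggler/ravdess-emotional-speech-audio",
    path="ravdess",   # target folder (illustrative)
    unzip=True,
)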

Data preprocessing :

The heart of this project lies in preprocessing the audio files; once that is done, roughly 70% of the project is already complete. We take advantage of two packages that make the task easier:

  • LibROSA - for processing the audio and extracting features from it.
  • soundfile - for reading and writing audio files on disk.

The main step in preprocessing the audio files is extracting features from them.

Features supported:

  • MFCC (mfcc)
  • Chroma (chroma)
  • MEL Spectrogram Frequency (mel)
  • Contrast (contrast)
  • Tonnetz (tonnetz)

In this project, code related to preprocessing the dataset is written in two functions.

  • load_data()
  • extract_features()

load_data() traverses every file in the dataset directory, extracts features from each file, and builds the input and output arrays that are fed to the machine learning algorithms. Finally, it splits the dataset into 80% training and 20% testing data.

import os
import glob
import numpy as np
from sklearn.model_selection import train_test_split

# int2emotion (RAVDESS emotion code -> label), AVAILABLE_EMOTIONS and
# extract_feature are defined elsewhere in the project (see utilities.py).

def load_data(test_size=0.2):
    X, y = [], []
    for file in glob.glob("/content/drive/My Drive/wav/Actor_*/*.wav"):
        # get the base name of the audio file
        basename = os.path.basename(file)
        # the third field of the RAVDESS file name encodes the emotion
        emotion = int2emotion[basename.split("-")[2]]
        # we allow only the AVAILABLE_EMOTIONS we set
        if emotion not in AVAILABLE_EMOTIONS:
            continue
        try:
            # extract speech features
            features = extract_feature(file, mfcc=True, chroma=True, mel=True)
        except Exception:
            # skip files that cannot be read or processed
            continue
        X.append(features)
        y.append(emotion)
    # split the data into training and testing sets and return it
    return train_test_split(np.array(X), y, test_size=test_size, random_state=7)


Below is the code snippet to extract features from each file.

import librosa
import numpy as np
import soundfile

def extract_feature(file_name, **kwargs):
    """
    Extract features from audio file `file_name`
        Features supported:
            - MFCC (mfcc)
            - Chroma (chroma)
            - MEL Spectrogram Frequency (mel)
            - Contrast (contrast)
            - Tonnetz (tonnetz)
        e.g:
        `features = extract_feature(path, mel=True, mfcc=True)`
    """
    mfcc = kwargs.get("mfcc")
    chroma = kwargs.get("chroma")
    mel = kwargs.get("mel")
    contrast = kwargs.get("contrast")
    tonnetz = kwargs.get("tonnetz")
    with soundfile.SoundFile(file_name) as sound_file:
        X = sound_file.read(dtype="float32")
        sample_rate = sound_file.samplerate
        # chroma and contrast are computed from the short-time Fourier transform
        if chroma or contrast:
            stft = np.abs(librosa.stft(X))
        result = np.array([])
        if mfcc:
            mfccs = np.mean(librosa.feature.mfcc(y=X, sr=sample_rate, n_mfcc=40).T, axis=0)
            result = np.hstack((result, mfccs))
        if chroma:
            chroma = np.mean(librosa.feature.chroma_stft(S=stft, sr=sample_rate).T, axis=0)
            result = np.hstack((result, chroma))
        if mel:
            mel = np.mean(librosa.feature.melspectrogram(y=X, sr=sample_rate).T, axis=0)
            result = np.hstack((result, mel))
        if contrast:
            contrast = np.mean(librosa.feature.spectral_contrast(S=stft, sr=sample_rate).T, axis=0)
            result = np.hstack((result, contrast))
        if tonnetz:
            tonnetz = np.mean(librosa.feature.tonnetz(y=librosa.effects.harmonic(X), sr=sample_rate).T, axis=0)
            result = np.hstack((result, tonnetz))
    return result
 
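Putting the two functions together, a quick sanity check (a hypothetical usage sketch, not code from the repository) looks like this. With mfcc, chroma and mel enabled, each file is reduced to a fixed-length vector of 40 + 12 + 128 = 180 values under the librosa defaults used above.

# Hypothetical sanity check of the preprocessing pipeline.
X_train, X_test, y_train, y_test = load_data(test_size=0.2)

print("training samples:", X_train.shape[0])
print("testing samples:", X_test.shape[0])
print("feature vector length:", X_train.shape[1])  # 40 mfcc + 12 chroma + 128 mel = 180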

Let's dive further into the project..


Training and Analysis:

Traditional Machine Learning Models:

Trains several traditional algorithms: Decision Tree, SVM and Random Forest.

Refer
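
A minimal sketch of what that training loop looks like is below. It is illustrative only; the hyperparameters and exact code differ from SER_using_ML_algorithms.py.

# Illustrative sketch: train the classical models on the extracted features.
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X_train, X_test, y_train, y_test = load_data(test_size=0.2)

models = {
    "SVM": SVC(kernel="rbf", gamma="scale"),
    "Decision Tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(n_estimators=200),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: {acc:.3f}")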

These algorithms do not give satisfactory results, so deep learning comes into play.

Deep Learning:

Implements a classical neural network architecture, the MLP. Refer
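
A rough sketch of such an MLP classifier, reusing the train/test split from above; the hyperparameters here are illustrative, not necessarily those in mlp_classifier_for_SER.py.

# Illustrative MLP classifier on the same feature vectors.
from sklearn.neural_network import MLPClassifier

mlp = MLPClassifier(
    hidden_layer_sizes=(300,),
    alpha=0.01,
    batch_size=256,
    learning_rate="adaptive",
    max_iter=500,
)
mlp.fit(X_train, y_train)
print("MLP accuracy:", mlp.score(X_test, y_test))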

Deep learning models like the MLP tend to overfit the data, so the preferred network is the CNN, which has been a game changer in many fields and applications. To find the best CNN architecture for the available dataset, CNNs with different architectures are trained on the dataset and their accuracies are recorded. Every architecture uses the same configuration and is trained for 500 epochs.

Refer
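
The notebook experiments with several variants of a 1-D CNN over the 180-dimensional feature vectors. The following is only an illustrative sketch of that idea; the layer sizes and the label-encoding step are assumptions, not the notebook's exact architecture.

# Illustrative 1-D CNN over the fixed-length feature vectors.
import numpy as np
from sklearn.preprocessing import LabelEncoder
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense, Dropout

# encode the string emotion labels as one-hot vectors
le = LabelEncoder()
y_train_oh = to_categorical(le.fit_transform(y_train))
y_test_oh = to_categorical(le.transform(y_test))

n_features = X_train.shape[1]    # 180 with mfcc + chroma + mel
n_classes = y_train_oh.shape[1]

model = Sequential([
    Conv1D(64, kernel_size=5, activation="relu", input_shape=(n_features, 1)),
    MaxPooling1D(pool_size=2),
    Conv1D(128, kernel_size=5, activation="relu"),
    MaxPooling1D(pool_size=2),
    Flatten(),
    Dense(128, activation="relu"),
    Dropout(0.3),
    Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# the feature vectors need a trailing channel dimension for Conv1D
model.fit(
    X_train[..., np.newaxis], y_train_oh,
    validation_data=(X_test[..., np.newaxis], y_test_oh),
    epochs=500, batch_size=32,
)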

Visualization.

For a better understanding of the data, and for visualizing the waveform and spectrogram of the audio files. Refer
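
A small sketch of those plots is shown below; matplotlib is assumed, waveshow is the newer librosa name (older versions use waveplot), and the file path is just an example.

# Illustrative waveform and spectrogram plots for a single audio file.
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

y, sr = librosa.load("Actor_01/03-01-05-01-01-01-01.wav")  # example path

plt.figure(figsize=(10, 6))

plt.subplot(2, 1, 1)
librosa.display.waveshow(y, sr=sr)
plt.title("Waveform")

plt.subplot(2, 1, 2)
D = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max)
librosa.display.specshow(D, sr=sr, x_axis="time", y_axis="hz")
plt.colorbar(format="%+2.0f dB")
plt.title("Spectrogram")

plt.tight_layout()
plt.show()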



Conclusion and Analysis :

  • Neural networks perform better than traditional machine learning models in most cases (by comparing metrics).

  • Since deep learning models are data hungry, they tend to overfit the training data (if we keep training the model, we get 95%+ training accuracy :) ).

  • CNN architectures perform better than traditional neural network architectures (in most cases a CNN outperforms an MLP under the same configuration).

  • CNNs with different architectures but the same configuration, learning rate and number of epochs still show vast differences in accuracy (from Speech_Emotion_Recognition_using_CNN.ipynb).



Hope you like this project :)


LICENSE:

MIT


Contact:
