aayushi12/thesis_dss

Emotion recognition in a model of visually grounded speech

The following code can be used to replicate the results of my thesis, "Emotion recognition in a model of visually grounded speech", written in partial fulfilment of the Master's in Data Science and Society at Tilburg University.

The experiments in this thesis were carried out using the code released by Merkx, Frank, and Ernestus (2019) as a reference. Where required, some modifications were made to their existing code. The emotional speech classification part was coded by me.

The code uses the pre-trained ResNet-152 network (He et al., 2016), which is freely available in PyTorch.

Sources for data are:

  1. flickr_audio: https://groups.csail.mit.edu/sls/downloads/flickraudio/
  2. Flickr8k_Dataset: https://machinelearningmastery.com/develop-a-deep-learning-caption-generation-model-in-python/
  3. dataset.json: https://cs.stanford.edu/people/karpathy/deepimagesent/
  4. RAVDESS: https://zenodo.org/record/1188976
  5. TESS: https://tspace.library.utoronto.ca/handle/1807/24487
  6. CREMA-D: https://github.com/CheyneyComputerScience/CREMA-D
