EmotionCommotion uses machine learning techniques to predict someone's emotions using only audio of their voice. It was created as part of a fourth-year university group project.

Developer Documentation

Build Guide

Requirements

  1. Install Python 3.5: apt-get install python3

  2. Install Tkinter for Python 3: apt-get install python3-tk

  3. Navigate to "EmotionCommotion/EmotionCommotion/" (the directory containing manage.py)

  4. From the command line (bash), run pip install -r requirements.txt

  5. Switch the Keras backend from TensorFlow to Theano (guide: https://keras.io/backend/); see the keras.json sketch after this list

  6. Install the whitenoise Django middleware class (guide: http://whitenoise.evans.io/en/stable/); a settings sketch is also shown after this list

  7. Run pip install dj-database-url

  8. Run python manage.py runserver
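
For step 5, the backend is switched by editing ~/.keras/keras.json (created the first time Keras is imported) so that its "backend" field reads "theano". The exact set of fields depends on the Keras version; a typical file after the change looks like:

    {
        "image_dim_ordering": "th",
        "epsilon": 1e-07,
        "floatx": "float32",
        "backend": "theano"
    }

The backend can also be overridden for a single session with the KERAS_BACKEND environment variable, e.g. KERAS_BACKEND=theano python manage.py runserver.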

Alternatively, install the requirements in a virtual environment dedicated to this project (http://python-guide-ptbr.readthedocs.io/en/latest/dev/virtualenvs/). The application serves on localhost:8000 by default; open it in a web browser (Chrome).
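
For step 6, whitenoise is enabled by adding its middleware class to the Django settings module. A minimal sketch, assuming a Django 1.10+ style settings.py (older Django versions list it under MIDDLEWARE_CLASSES instead):

    MIDDLEWARE = [
        'django.middleware.security.SecurityMiddleware',
        # whitenoise should sit directly below SecurityMiddleware
        'whitenoise.middleware.WhiteNoiseMiddleware',
        # ... the project's remaining middleware classes ...
    ]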

User Documentation

To view a classification in real time, follow these steps:

  1. Visit the web page, found at http://localhost:8000 if the client is running on the same machine as the server. (If the server is running remotely, Chrome's --unsafely-treat-insecure-origin-as-secure flag must be set for the server's origin; see the example command after this list.)

  2. Click the microphone button in the centre of the page to begin recording, and speak to the system. The waveform is shown, and the percentage of each detected emotion appears in a pie chart that updates as more audio is recorded.

  3. Press the stop button to finish the recording. The dominant emotion identified throughout the clip is displayed in the form of an emoji.
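
For step 1 with a remote server, Chrome must be launched with the flag scoped to the server's origin, and Chrome only honours the flag when a non-default profile directory is also given. A sketch, where the host name and profile path are placeholders:

    google-chrome --unsafely-treat-insecure-origin-as-secure="http://your-server:8000" --user-data-dir=/tmp/emotioncommotion-profile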

Audio Processing Scripts

These scripts were used to aid in the creation of the YouTube dataset as described in the report.

audioChopper.py

‘audioChopper.py’ takes as command-line arguments the path to an audio file and the start and end second of each emotion within that file. The chopped audio files are written to the directory containing the script, along with a CSV file of the emotion labels. The second argument is written as a Python list (square brackets) containing one or more sublists of three elements: the start second, the end second, and the emotion number (0=neutral, 1=happy, 2=angry, 3=sad), in that order. A sketch of the chopping logic is given after the example below.

Example: python audioChopper.py /audio/clip_1.wav [[1,3,0],[4,8,3],[13,20,1]]
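
The core chopping logic can be sketched as follows. This is a minimal reconstruction rather than the script itself: the pydub dependency and the emotions.csv output name are assumptions.

    import ast
    import csv
    import os
    import sys

    from pydub import AudioSegment

    audio_path = sys.argv[1]
    segments = ast.literal_eval(sys.argv[2])   # e.g. [[1,3,0],[4,8,3],[13,20,1]]

    clip = AudioSegment.from_wav(audio_path)
    base = os.path.splitext(os.path.basename(audio_path))[0]

    # append one row per chopped segment: (file name, emotion number)
    with open('emotions.csv', 'a', newline='') as f:
        writer = csv.writer(f)
        for i, (start, end, emotion) in enumerate(segments):
            out_name = '%s_%d.wav' % (base, i)
            # pydub indexes audio in milliseconds
            clip[start * 1000:end * 1000].export(out_name, format='wav')
            writer.writerow([out_name, emotion])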

audioReducer.py

‘audioReducer.py’ post-processes the files generated by ‘audioChopper.py’. Pass the path of the directory containing the WAV files as the command-line argument, and ensure that every WAV file in that directory is intended to be part of the dataset. The script produces the complete dataset and a CSV file of emotion labels inside that directory.

Example: python audioReducer.py /dcs/project/audio_chopper_output/
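
A skeleton of this flow is sketched below. It is again a reconstruction, not the actual script: the emotions.csv file name, the dataset/ output directory, and the mono/16 kHz conversion are all assumptions; the real reduction steps are the ones described in the report.

    import csv
    import os
    import sys

    from pydub import AudioSegment

    audio_dir = sys.argv[1]
    out_dir = os.path.join(audio_dir, 'dataset')   # hypothetical output location
    os.makedirs(out_dir, exist_ok=True)

    # labels written next to the chopped clips (see the audioChopper.py sketch)
    with open(os.path.join(audio_dir, 'emotions.csv')) as f:
        rows = list(csv.reader(f))

    with open(os.path.join(out_dir, 'emotions.csv'), 'w', newline='') as f:
        writer = csv.writer(f)
        for name, emotion in rows:
            clip = AudioSegment.from_wav(os.path.join(audio_dir, name))
            # assumed post-processing: downmix to mono, resample to 16 kHz
            clip = clip.set_channels(1).set_frame_rate(16000)
            clip.export(os.path.join(out_dir, name), format='wav')
            writer.writerow([name, emotion])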
