Project title : Biosignal processing for automatic emotion recognition
In this project we will use multimodal datasets from open data sources, applying biosignal processing and machine learning techniques to perform automatic emotion recognition. The project will be carried out in Python, using high-performance computing to build a classifier of discrete emotional states. We will make extensive use of data visualization and will share our progress through a notebook for anyone interested.
Right now, we're a team of 2 people (Danielle and Achraf), but other fellow BrainHack School 2020 participants are very welcome to participate if their interests fit!
Danielle:
I’m a master’s student working on the development of a device for students on the autism spectrum. The project envisions a device that can address students’ auditory sensitivities by filtering out distressing classroom sounds in real-time. I plan to use machine learning techniques for the following 2 components of the project:
- Audio event classification for the detection of identified classroom sounds
- Biosignal-based automatic emotion recognition for the detection of sound-induced distress
Assuming the second component is where I can make the most of the expertise of fellow BrainHack School participants, I plan to focus on biosignal processing for automatic emotion recognition during the project weeks.
Achraf:
I am currently studying biomedical engineering at Polytechnique Montréal (M.Eng) and have joined the BrainHack School 2020 adventure to grow my technical skills and my network. I had a modest introduction to neuroimaging before enrolling in the biomedical engineering path, and I would like to broaden my knowledge of the field and sharpen my skills in it.
My goal for the BrainHack School 2020 is to learn as much as I can about modern ways of doing neuroimaging, to improve my Python skills through a hands-on, multidisciplinary project, and to exchange information and expertise with the other participants.
The project idea is from Danielle with whom I will be collaborating.
Psychological stress has been found to be associated with changes in certain biosignals. Features extracted from these biosignals, such as the EEG alpha asymmetry index, heart rate variability, and skin conductance response, have increasingly been used to predict an individual's emotional state.
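As a minimal sketch of one such feature, frontal alpha asymmetry is commonly computed as the difference in log alpha-band (8–13 Hz) power between a right and a left frontal electrode (e.g. F4 and F3). The channel pair, the 256 Hz sampling rate, and the synthetic signals below are illustrative assumptions, not details from any of the databases:

```python
import numpy as np
from scipy.signal import welch

def alpha_asymmetry(left, right, fs=256.0):
    """Frontal alpha asymmetry: ln(right alpha power) - ln(left alpha power).

    `left`/`right` are 1-D signals from a left/right frontal pair
    (e.g. F3/F4); the 256 Hz sampling rate is an assumption."""
    def alpha_power(x):
        # Welch power spectral density, then integrate over the alpha band
        f, psd = welch(x, fs=fs, nperseg=min(len(x), 1024))
        band = (f >= 8.0) & (f <= 13.0)
        return np.sum(psd[band]) * (f[1] - f[0])
    return np.log(alpha_power(right)) - np.log(alpha_power(left))

# Synthetic demo: stronger 10 Hz alpha on the left channel
t = np.arange(0, 4, 1 / 256.0)
rng = np.random.default_rng(0)
left = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
right = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
print(alpha_asymmetry(left, right))  # negative: more alpha power on the left
```

On real EEG this would be preceded by referencing, filtering, and artifact rejection; the sketch only shows the feature itself.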
Tools and techniques we plan to use:
- High-performance computing: Compute Canada
- Preprocessing and feature extraction with Python
- Data visualization with Python
- GitHub
- Python Virtual Environment
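For the virtual-environment item, a typical project setup might look like the following; the environment name and package list are assumptions about what we expect to need, not a finalized dependency list:

```shell
# Create and activate an isolated Python environment for the project
python3 -m venv emotion-env
source emotion-env/bin/activate

# Install a typical scientific stack (versions deliberately unpinned here)
pip install --upgrade pip
pip install numpy scipy matplotlib scikit-learn
```

On Compute Canada clusters the same pattern applies, usually after `module load python` and with packages installed inside the job's environment.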
So far, we have access to the following emotion-correlated biosignal databases:
- The MAHNOB-HCI-Tagging database
  - Biosignal data: EEG (in the form of BDF files), ECG, respiration amplitude, and skin temperature
  - Emotion data: ratings on the valence-arousal scale provided by participants and, for some of the data, emotion tags (e.g. "amused") selected by participants
- DREAMER: A Database for Emotion Recognition through EEG and ECG Signals from Wireless Low-cost Off-the-Shelf Devices
  - Biosignal data: EEG and ECG
  - Emotion data: ratings on the valence-arousal-dominance scale provided by participants
- An EEG dataset recorded during affective music listening
  - Biosignal data: EEG
  - Emotion data: ratings on the valence-arousal scale provided by participants
- AffectiveROAD system and database to assess driver's attention
  - Biosignal data: BVP, EDA, ECG, respiration rate, skin temperature
  - Emotion data: a "stress metric" provided by the observing experimenter
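Several of these databases include ECG, from which heart-rate-variability features can be derived. As a minimal sketch (assuming R-peak detection has already been done upstream), RMSSD, a common time-domain HRV feature, can be computed from successive inter-beat (RR) intervals; the interval values below are made up for illustration:

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences of RR intervals (ms)."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    diffs = np.diff(rr)          # beat-to-beat changes
    return np.sqrt(np.mean(diffs ** 2))

# Hypothetical RR intervals (ms) taken from detected R-peaks
rr = [812, 798, 830, 805, 790, 821]
print(round(rmssd(rr), 1))  # → 24.6
```

In practice, R-peak detection and artifact handling on the raw ECG would come first; this only shows the feature computation.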
At the end of this project, we plan to complete:
- Data preprocessing and feature extraction for at least one biosignal using Compute Canada.
  → Python scripts
  → Job files
- Visualization of the relationship between the extracted features and the emotion data.
  → Python scripts
  → Image files included in report/notebook
- Training of a classifier to predict the emotion data using Compute Canada.
  → Python scripts
  → Evaluation of classifier performance
  → Job files
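As a hedged sketch of the classifier step, training and cross-validated evaluation could look like the following; the random placeholder features, the three emotion classes, and the choice of an SVM are illustrative assumptions, not a fixed design:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: rows = recordings, columns = extracted biosignal
# features (e.g. alpha asymmetry, HRV); labels = discrete emotion classes.
rng = np.random.default_rng(42)
X = rng.standard_normal((120, 4))
y = rng.integers(0, 3, size=120)  # e.g. 0 = calm, 1 = amused, 2 = distressed

# Standardize features, then fit an RBF-kernel SVM; 5-fold cross-validation
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")  # ~chance on random labels
```

On a Compute Canada cluster, a script like this would be wrapped in a job file and submitted to the scheduler rather than run interactively.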