# Real Time Facial Expression Classification

*Video demonstration of CNN face tracking and emotion classification.*

Face perception is one of the most important cognitive tasks humans perform each day; so much so that facial expertise has dedicated representation within the brain's modular layout.

Under neurotypical conditions this task is accomplished relatively easily; however, individuals with autism spectrum disorder or schizophrenia have increased difficulty with face classification.

Advances in computer vision have made it possible to assist these individuals with face perception. This tech demo shows how such methods could support emotion perception!

The provided model was trained on a set of 13,312 48×48 grayscale face images, evenly split across four emotion categories: angry, sad, happy, and neutral. These images were scraped from various stock image websites and sourced from psychophysics stimuli used in my academic research.

*Example of emotion classification on multiple faces.*

Face tracking in the live video demonstration is accomplished using dlib's CNN face detector, which is far more robust than Haar cascades while remaining fast enough for real-time use. As demonstrated above, it handles multiple faces effectively!
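For reference, multi-face detection with dlib's CNN detector comes down to a few calls. A minimal sketch, assuming dlib's pretrained `mmod_human_face_detector.dat` weights (downloadable from dlib.net; not necessarily what this repo's `models` directory contains) and a hypothetical input image:

```python
# Sketch of multi-face detection with dlib's CNN detector.
# Assumes the pretrained mmod_human_face_detector.dat weights file
# has been downloaded separately; the input image name is hypothetical.
import cv2
import dlib

detector = dlib.cnn_face_detection_model_v1("mmod_human_face_detector.dat")

frame = cv2.imread("group_photo.jpg")         # hypothetical input image
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # dlib expects RGB channel order

# Second argument is the number of upsampling passes; 1 helps find small faces.
for det in detector(rgb, 1):
    r = det.rect  # each CNN detection wraps a rectangle plus a confidence score
    cv2.rectangle(frame, (r.left(), r.top()), (r.right(), r.bottom()),
                  (0, 255, 0), 2)
```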

## Validation

The mathematician John von Neumann famously stated "With four parameters I can fit an elephant and with five I can make him wiggle his trunk." In other words, it is very easy to over-fit a model when working with a large number of parameters.

*Loss and accuracy plots.*

With this in mind, loss and accuracy were monitored carefully, and training was stopped early at the fifth epoch, since training beyond that point resulted in over-fitting.
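The cutoff here was chosen by inspecting the plots; the same idea can be automated with Keras' `EarlyStopping` callback. A minimal sketch on toy data with a placeholder architecture, not the network this repo actually trains:

```python
# Sketch: automating the early stop with Keras' EarlyStopping callback.
import numpy as np
from keras.callbacks import EarlyStopping
from keras.layers import Conv2D, Dense, Flatten, MaxPooling2D
from keras.models import Sequential

# Toy stand-ins for the real data: 48x48 grayscale faces, 4 emotion classes.
x = np.random.rand(256, 48, 48, 1).astype("float32")
y = np.eye(4)[np.random.randint(0, 4, size=256)]

# Placeholder architecture -- NOT the network defined in train.py.
model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(48, 48, 1)),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# Stop when validation loss stagnates instead of hand-picking an epoch count.
early_stop = EarlyStopping(monitor="val_loss", patience=2,
                           restore_best_weights=True)
model.fit(x, y, validation_split=0.2, epochs=50, callbacks=[early_stop])
```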

## Dependencies

You can use pip to install any missing dependencies.
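The exact dependency list isn't pinned down here; judging from the tools mentioned in this readme, something along these lines should cover it (note that dlib builds from source and needs CMake and a C++ compiler):

```
pip install tensorflow keras dlib opencv-python numpy
```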

## Basic Usage

Before training, the data must be preprocessed and serialized. First place all images into the appropriately labeled subdirectories of the faces directory, then serialize the data by running:

```
python makepkl.py
```
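makepkl.py's internals aren't shown in this readme; a minimal sketch of the serialization step, assuming the faces subdirectories are named after the four emotion labels and that everything is written to a single pickle file (the real script's naming and output format may differ):

```python
# Sketch of the preprocessing/serialization step. Assumes faces/<label>/
# subdirectories named after the four emotion categories; the actual
# makepkl.py may differ in layout and output format.
import os
import pickle

import cv2
import numpy as np

LABELS = ["angry", "sad", "happy", "neutral"]

images, targets = [], []
for idx, label in enumerate(LABELS):
    folder = os.path.join("faces", label)
    for fname in os.listdir(folder):
        img = cv2.imread(os.path.join(folder, fname), cv2.IMREAD_GRAYSCALE)
        if img is None:
            continue  # skip anything that isn't a readable image
        img = cv2.resize(img, (48, 48))
        images.append(img.astype("float32") / 255.0)  # scale pixels to [0, 1]
        targets.append(idx)

data = {
    "x": np.expand_dims(np.array(images), -1),  # shape (N, 48, 48, 1)
    "y": np.array(targets),
}
with open("faces.pkl", "wb") as f:  # hypothetical output filename
    pickle.dump(data, f)
```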

Once data has been serialized, training can begin by running:

```
python train.py
```

After choosing a model based on validation statistics (either by training your own or using the provided model), a demonstration of real-time emotion classification using your camera can be performed by running:

```
python emotion_detector.py
```
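In outline, the demo has to grab webcam frames, detect faces, and classify each crop with the trained network. A condensed sketch of that loop, assuming the bundled model.h5, dlib's pretrained detector weights, and a guessed label order (the actual script's structure may differ):

```python
# Condensed sketch of a real-time emotion classification loop.
# Assumes model.h5 from this repo and dlib's pretrained CNN detector weights;
# the label order here is a guess, not taken from the actual script.
import cv2
import dlib
import numpy as np
from keras.models import load_model

LABELS = ["angry", "sad", "happy", "neutral"]  # assumed ordering
model = load_model("model.h5")
detector = dlib.cnn_face_detection_model_v1("mmod_human_face_detector.dat")

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    for det in detector(rgb, 0):
        r = det.rect
        face = frame[max(r.top(), 0):r.bottom(), max(r.left(), 0):r.right()]
        if face.size == 0:
            continue  # detection fell outside the frame
        gray = cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)
        crop = cv2.resize(gray, (48, 48)).astype("float32") / 255.0
        probs = model.predict(crop.reshape(1, 48, 48, 1), verbose=0)[0]
        label = LABELS[int(np.argmax(probs))]
        cv2.rectangle(frame, (r.left(), r.top()), (r.right(), r.bottom()),
                      (0, 255, 0), 2)
        cv2.putText(frame, label, (r.left(), r.top() - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    cv2.imshow("emotion demo", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```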

## New Directions

Future plans include training on a larger dataset of higher-resolution images.

## Acknowledgements

This project was inspired by and conceptually based on atulapra's facial expression detection algorithm.
