Real Time Facial Expression Classification
Advances in computer vision algorithms have made it possible to assist individuals who struggle with face perception. This tech demo shows how such methods could assist with emotion perception!
The provided model was trained on a set of 13,312 48x48 grayscale face images evenly split across four emotion categories: angry, sad, happy, and neutral. These images were scraped from various stock image websites and sourced from psychophysics stimuli used in my academic research.
Face tracking in the live video demonstration is accomplished using dlib's CNN face detector, which is far more robust than Haar cascades while still running fast enough for real-time use. As demonstrated above, it handles multiple faces quite effectively!
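Below is a minimal sketch of this detection step, assuming the pretrained `mmod_human_face_detector.dat` weights (available from dlib.net) sit next to the script; the actual demo code in this repo may differ:

```python
# Minimal sketch of the live detection loop; assumes the pretrained
# mmod_human_face_detector.dat weights (from dlib.net) are present.
import cv2
import dlib

detector = dlib.cnn_face_detection_model_v1("mmod_human_face_detector.dat")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # dlib expects RGB input; OpenCV captures frames in BGR order
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    for detection in detector(rgb, 0):  # 0 upsampling passes keeps it fast
        r = detection.rect
        cv2.rectangle(frame, (r.left(), r.top()), (r.right(), r.bottom()),
                      (0, 255, 0), 2)
    cv2.imshow("Faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```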
The mathematician John von Neumann famously stated, "With four parameters I can fit an elephant, and with five I can make him wiggle his trunk." In other words, it is very easy to over-fit a model when working with a large number of parameters.
With this in mind, validation loss and accuracy were carefully monitored, and training was stopped early at the fifth epoch, since training beyond that point results in over-fitting.
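For illustration, here is a minimal sketch of monitoring validation loss with early stopping, assuming a Keras model; the tiny CNN and random data are stand-ins so the example runs end to end, not the project's actual architecture:

```python
# Early-stopping sketch (Keras assumed); the toy CNN and random data below
# are placeholders so the example is self-contained and runnable.
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.callbacks import EarlyStopping

x = np.random.rand(256, 48, 48, 1).astype("float32")  # stand-in images
y = np.random.randint(0, 4, size=(256,))               # stand-in labels

model = models.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(48, 48, 1)),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(
    x, y,
    validation_split=0.2,
    epochs=50,
    # halt once validation loss stops improving, keeping the best weights
    callbacks=[EarlyStopping(monitor="val_loss", patience=2,
                             restore_best_weights=True)],
)
```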
You can use pip to install any missing dependencies.
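The exact dependency list isn't pinned down here, but based on the tools mentioned above it is likely along these lines:

```bash
pip install numpy opencv-python dlib tensorflow
```

Note that installing dlib via pip builds it from source, which requires CMake and a C++ compiler.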
Before training, the data must be preprocessed and serialized. First, place all images into the appropriately labeled subdirectories of the faces directory; then serialize the data by running:
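(The original command isn't reproduced here; the script name below is a placeholder for the repo's actual serialization script.)

```bash
python serialize_data.py  # placeholder name
```

In spirit, the serialization step presumably does something like the following sketch; the faces/ layout comes from the text above, while the output file name and preprocessing details are assumptions:

```python
# Sketch of the serialization step; the faces/<label>/ layout is from the
# README, while data.pickle and the preprocessing details are assumptions.
import os
import pickle

import cv2
import numpy as np

LABELS = ["angry", "sad", "happy", "neutral"]

images, labels = [], []
for idx, label in enumerate(LABELS):
    folder = os.path.join("faces", label)
    for name in os.listdir(folder):
        img = cv2.imread(os.path.join(folder, name), cv2.IMREAD_GRAYSCALE)
        if img is None:  # skip non-image files
            continue
        images.append(cv2.resize(img, (48, 48)))
        labels.append(idx)

x = np.array(images, dtype="float32")[..., np.newaxis] / 255.0  # scale to [0, 1]
y = np.array(labels)

with open("data.pickle", "wb") as f:
    pickle.dump((x, y), f)
```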
Once the data has been serialized, training can begin by running:
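(As above, the script name is a placeholder for the repo's actual training script.)

```bash
python train.py  # placeholder name
```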
After choosing a model based on validation statistics (either by training your own or using the provided model), a demonstration of real-time emotion classification using your camera can be performed by running:
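(Again, the script name is a placeholder for the repo's actual demo script.)

```bash
python demo.py  # placeholder name
```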
Future plans include training on a larger data set of higher-definition images.
This project was inspired by and conceptually based on atulapra's facial expression detection algorithm.