Facial Expression Analysis with AFFDEX and FACET: A Validation Study
The goal of this study was to validate AFFDEX and FACET, two software-based algorithms for analyzing emotional facial expressions. In Study 1, pictures of standardized emotional facial expressions from three databases, the Warsaw Set of Emotional Facial Expression Pictures (WSEFEP), the Amsterdam Dynamic Facial Expression Set (ADFES), and the Radboud Faces Database (RaFD), were classified with both modules. Results show large variance in accuracy across emotions and databases, with a performance advantage for FACET over AFFDEX. In Study 2, the facial responses of 110 respondents were measured while they were exposed to emotionally evocative pictures from the International Affective Picture System (IAPS), the Geneva Affective Picture Database (GAPED), and the Radboud Faces Database (RaFD). Accuracy again differed across emotions, with FACET performing better. Overall, iMotions can achieve acceptable accuracy for standardized pictures of prototypical (vs. natural) facial expressions, but only weak accuracy for more natural facial expressions. We discuss potential sources of limited validity and suggest research directions in the broader context of emotion research.
This repository contains the necessary code and data to replicate the study.
To be filled.
To replicate the findings of this validation study, each folder (`Study 1` and `Study 2`) must be treated separately: with the R working directory set to the respective `R` folder, run each of the numbered scripts in order. After execution, the respective results are located in
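The per-study workflow can be sketched as a small shell driver. The `Study 1`/`Study 2` folder layout follows this repository, but the numbered script names, the `R` subfolder path, and the `run_numbered_scripts` helper are illustrative assumptions; adjust them to the actual file names before use.

```shell
#!/bin/sh
# Sketch of the replication workflow (script names are assumptions).
# RUNNER defaults to Rscript; override it (e.g. RUNNER=echo) for a dry run.
RUNNER="${RUNNER:-Rscript}"

run_numbered_scripts() {
  # Run every numbered .R script in the given folder.
  # Shell pathname expansion sorts matches, so zero-padded
  # prefixes (01_, 02_, ...) execute in the intended order.
  dir="$1"
  for script in "$dir"/[0-9]*.R; do
    [ -e "$script" ] && "$RUNNER" "$script"
  done
}

# Treat each study folder separately, as the instructions require.
for study in "Study 1" "Study 2"; do
  [ -d "$study/R" ] && run_numbered_scripts "$study/R"
done
```

Setting `RUNNER=echo` prints the scripts in execution order without running them, which is a quick way to check that the numbering produces the sequence you expect.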
We thank Adem Halimi and Elena von Wyttenbach for their help with the data collection.