This repository provides an implementation of the paper "Face Tells Detailed Expression: Generating Comprehensive Facial Expression Sentence through Facial Action Units".
- Python 2.7 (the code has been tested with this version)
- Download vgg_face_weights and put it in the project.
- Download pycocoevalcap, unzip it, and put it here.
- The text-based dataset with comprehensive facial expression sentences is available here.
- The provided code is runnable on the CK+ dataset.
- We have prepared the dataset in the format below. Several examples are provided here.
data_root
├── Dataset/CK+ (or any other speaker-specific folder)
│   ├── cohn-kanade-images/ (aligned video image frames)
│   ├── FACS/ (facial action unit annotations)
│   └── Emotion/ (emotion labels)
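As a rough illustration of how this layout can be traversed, the sketch below pairs each CK+ image sequence with the corresponding FACS and emotion label directories. This is a minimal, hypothetical helper written against the folder structure above, not the repository's own loader; the actual code may organise data loading differently.

```python
import os


def collect_ck_samples(data_root):
    """Pair each CK+ sequence with its FACS and emotion label directories.

    Assumes the layout shown above: frames under
    cohn-kanade-images/<subject>/<sequence>/, with matching
    FACS/<subject>/<sequence>/ and Emotion/<subject>/<sequence>/ folders.
    """
    samples = []
    images_dir = os.path.join(data_root, "cohn-kanade-images")
    for subject in sorted(os.listdir(images_dir)):
        subject_dir = os.path.join(images_dir, subject)
        if not os.path.isdir(subject_dir):
            continue
        for seq in sorted(os.listdir(subject_dir)):
            seq_dir = os.path.join(subject_dir, seq)
            if not os.path.isdir(seq_dir):
                continue
            # CK+ frames are distributed as PNG images
            frames = sorted(
                f for f in os.listdir(seq_dir) if f.endswith(".png")
            )
            samples.append({
                "subject": subject,
                "sequence": seq,
                "frames": [os.path.join(seq_dir, f) for f in frames],
                "facs_dir": os.path.join(data_root, "FACS", subject, seq),
                "emotion_dir": os.path.join(data_root, "Emotion", subject, seq),
            })
    return samples
```

Note that not every CK+ sequence has an emotion label, so downstream code should check whether the paired label directories are non-empty before use.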
Please cite the following paper if you use this code:
@inproceedings{hong2020face,
  title={Face Tells Detailed Expression: Generating Comprehensive Facial Expression Sentence Through Facial Action Units},
  author={Hong, Joanna and Lee, Hong Joo and Kim, Yelin and Ro, Yong Man},
  booktitle={International Conference on Multimedia Modeling},
  pages={100--111},
  year={2020},
  organization={Springer}
}