Source code for the champion entry of the micro-emotion recognition competition held at FG 2017.


If you use these models or code in your research, please cite:

```
@inproceedings{guo2017multi,
  title={Multi-modality Network with Visual and Geometrical Information for Micro Emotion Recognition},
  author={Guo, Jianzhu and Zhou, Shuai and Wu, Jinlin and Wan, Jun and Zhu, Xiangyu and Lei, Zhen and Li, Stan Z},
  booktitle={Automatic Face \& Gesture Recognition (FG 2017), 2017 12th IEEE International Conference on},
  year={2017},
  organization={IEEE}
}

@article{guo2018dominant,
  title={Dominant and Complementary Emotion Recognition from Still Images of Faces},
  author={Guo, Jianzhu and Lei, Zhen and Wan, Jun and Avots, Egils and Hajarolasvadi, Noushin and Knyazev, Boris and Kuharenko, Artem and Jacques, Julio CS and Bar{\'o}, Xavier and Demirel, Hasan and others},
  journal={IEEE Access},
  year={2018}
}
```

For final evaluation


First, generate the cropped and aligned data for the test (challenge) dataset. Change to the crop_align directory


The cropped and aligned 224x224 test data (final-evaluation-phase data) will be placed in $ROOT/data/face_224.

Then change to the cnn directory and run


It will load the preprocessed data and the trained Caffe model to generate a label file named predictions.txt for the test data.

Some directory paths in this repo may be confusing; please check them carefully, and contact me if any questions occur.

The trained Caffe model is only an experimental model; it may not achieve the best performance in this challenge.

Then upload predictions.txt via the submission window.


We use Dlib for face and landmark detection, use the landmarks for face cropping and alignment, and then use Caffe to train a CNN on the cropped images and landmarks for the facial expression recognition task.
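The alignment step above can be sketched as follows. This is a minimal numpy sketch, not the repo's actual crop_align implementation: the canonical eye positions and the two-point similarity-transform estimate are illustrative assumptions.

```python
import numpy as np

def similarity_from_eyes(left_eye, right_eye, out_size=224):
    """Estimate a 2x3 similarity transform mapping the detected eye
    centers onto canonical positions in an out_size x out_size crop.
    (The canonical eye positions here are assumptions.)"""
    # Assumed canonical eye locations in the aligned face.
    dst_l = np.array([0.35 * out_size, 0.40 * out_size])
    dst_r = np.array([0.65 * out_size, 0.40 * out_size])

    src_vec = np.asarray(right_eye, float) - np.asarray(left_eye, float)
    dst_vec = dst_r - dst_l
    # Scale and rotation carrying the source eye vector onto the target one.
    scale = np.linalg.norm(dst_vec) / np.linalg.norm(src_vec)
    angle = np.arctan2(dst_vec[1], dst_vec[0]) - np.arctan2(src_vec[1], src_vec[0])
    c, s = scale * np.cos(angle), scale * np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    t = dst_l - R @ np.asarray(left_eye, float)  # translation fixing the left eye
    return np.hstack([R, t[:, None]])            # 2x3 affine matrix

def warp_points(M, pts):
    """Apply the 2x3 transform to an (N, 2) array of landmark points."""
    return np.asarray(pts, float) @ M[:, :2].T + M[:, 2]
```

In a real pipeline, the same 2x3 matrix would also be used to warp the image itself (e.g. with an affine-warp routine) so the landmarks and the 224x224 crop stay in correspondence.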



First, run the landmark script to get the landmarks for all the original images; then build the crop_align binary and run it to produce all the 224x224 images.

Build crop_align

```
cd crop_align
mkdir build
cd build
cmake ..
make
```

All the preprocessed data except the images are in the data directory.


Change to the cnn directory and run the data-preparation script to prepare the training, validation, and test data. Then run the training script to start training.


Just run the evaluation script to generate the result; the input is the test image and its landmark-offset info.
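Writing the result file can be sketched like this. This is a hypothetical Python sketch: the label names, the output format, and the function names are assumptions, and in the real pipeline the class scores would come from a Caffe forward pass over the aligned face and its landmark-offset feature.

```python
import numpy as np

# Hypothetical emotion label set; the challenge's actual labels may differ.
LABELS = ["neutral", "happy", "sad", "surprise", "angry", "disgust", "fear"]

def write_predictions(image_names, scores, out_path="predictions.txt"):
    """Pick the highest-scoring class per image and write one
    'image_name predicted_label' line per test image."""
    with open(out_path, "w") as f:
        for name, row in zip(image_names, np.asarray(scores)):
            f.write("%s %s\n" % (name, LABELS[int(np.argmax(row))]))
```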


We use the landmark offsets together with the image to do this task. In detail, the landmark offset is computed by subtracting each identity's mean landmarks from the landmarks of the 224x224 image, and we concatenate this feature to the last output feature of a modified AlexNet. We also replace the softmax loss with a hinge loss, which gives a slightly better result.
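The landmark-offset feature described above can be sketched as follows. This is a minimal numpy sketch under stated assumptions: the landmark count (e.g. 68 Dlib points), the feature dimension, and the flatten-then-concatenate fusion are illustrative, not the repo's exact code.

```python
import numpy as np

def landmark_offset(landmarks, mean_landmarks):
    """Offset feature: the aligned 224x224 face's landmarks minus the
    per-identity mean landmarks, flattened to a 1-D vector.
    Both inputs are (N, 2) arrays (e.g. N=68 Dlib points)."""
    return (np.asarray(landmarks, float) - np.asarray(mean_landmarks, float)).ravel()

def fuse_features(cnn_feature, offset_feature):
    """Concatenate the CNN's last-layer output with the landmark offsets,
    mirroring the multi-modality fusion described above."""
    return np.concatenate([np.ravel(cnn_feature), np.ravel(offset_feature)])
```

The fused vector is what the final classifier (trained with the hinge loss) would operate on.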

More details are in fact_sheets.tex.