This repo is our attempt to implement the paper "CNN-based Facial Affect Analysis on Mobile Devices". The paper itself is included in the repo and can be downloaded.
The dataset is available at the following link:
https://drive.google.com/file/d/1MknXcvOW7FhQrtLWYJkti6MwvZBkwWgu/view
Since the data is not balanced, we used data augmentation to balance the classes. The dataset contains 8 classes: ["anger", "contempt", "disgust", "fear", "happy", "neutral", "sad", "suprise"]. For augmentation, we rotated images by 20 degrees, translated them by up to 10% along the X and Y axes, and flipped them with respect to the X axis.
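A minimal sketch of such an augmentation pipeline is shown below, using Keras' `ImageDataGenerator`. The parameter values mirror the description above; the flip is interpreted as a left-right mirror (the usual choice for face images), and the `balance_class` helper is a hypothetical illustration rather than the repo's actual code.

```python
# Sketch only: assumed augmentation setup, not necessarily the repo's exact code.
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

CLASSES = ["anger", "contempt", "disgust", "fear",
           "happy", "neutral", "sad", "suprise"]

augmenter = ImageDataGenerator(
    rotation_range=20,       # random rotations up to 20 degrees
    width_shift_range=0.1,   # translate up to 10% along the X axis
    height_shift_range=0.1,  # translate up to 10% along the Y axis
    horizontal_flip=True,    # mirror flip (use vertical_flip for a literal X-axis flip)
)

def balance_class(images, target_count):
    """Augment `images` (N x H x W x C float array) until it has `target_count` samples."""
    samples = list(images)
    while len(samples) < target_count:
        source = images[np.random.randint(len(images))]
        samples.append(augmenter.random_transform(source))
    return np.stack(samples)
```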
The AlexNet architecture is shown below:
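For reference, here is a minimal Keras sketch of a standard AlexNet-style network with an 8-way softmax. The 224x224x3 input size and layer widths follow the original AlexNet and are assumptions, not necessarily the exact configuration used in this repo.

```python
# Sketch of a standard AlexNet-style network adapted to the 8 emotion classes.
# Input size and layer sizes are assumptions following the original AlexNet.
from tensorflow.keras import layers, models

def build_alexnet(input_shape=(224, 224, 3), num_classes=8):
    model = models.Sequential([
        layers.Conv2D(96, 11, strides=4, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(3, strides=2),
        layers.Conv2D(256, 5, padding="same", activation="relu"),
        layers.MaxPooling2D(3, strides=2),
        layers.Conv2D(384, 3, padding="same", activation="relu"),
        layers.Conv2D(384, 3, padding="same", activation="relu"),
        layers.Conv2D(256, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(3, strides=2),
        layers.Flatten(),
        layers.Dense(4096, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(4096, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    return model
```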
We trained the model for 50 iterations; the loss and accuracy plots are shown below:
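The curves can be produced roughly as follows. The Adam optimizer, batch size, and placeholder data arrays are assumptions for illustration, `build_alexnet` refers to the sketch above, and "50 iterations" is interpreted here as 50 epochs.

```python
# Hedged training/plotting sketch; replace the random placeholder arrays with the
# real (augmented) dataset. build_alexnet comes from the sketch above.
import numpy as np
import matplotlib.pyplot as plt

x_train = np.random.rand(32, 224, 224, 3).astype("float32")  # placeholder images
y_train = np.eye(8)[np.random.randint(0, 8, 32)]             # placeholder one-hot labels
x_val, y_val = x_train[:8], y_train[:8]                      # placeholder validation split

model = build_alexnet()
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    epochs=50, batch_size=64)

# plot the loss and accuracy curves recorded by Keras
for metric in ("loss", "accuracy"):
    plt.figure()
    plt.plot(history.history[metric], label=f"train {metric}")
    plt.plot(history.history[f"val_{metric}"], label=f"val {metric}")
    plt.xlabel("epoch")
    plt.ylabel(metric)
    plt.legend()
plt.show()
```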
The VGGNet architecture is shown below:
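The exact VGG variant is not pinned down here; one plausible setup (an assumption, not the repo's confirmed choice) is the stock VGG16 topology from Keras applications, trained from scratch with an 8-way softmax:

```python
# Sketch: VGG16 from Keras applications, randomly initialized, 8 output classes.
# The variant (VGG16 vs. VGG19) and the input size are assumptions.
from tensorflow.keras.applications import VGG16

vgg = VGG16(weights=None, input_shape=(224, 224, 3), classes=8)
vgg.compile(optimizer="adam",
            loss="categorical_crossentropy",
            metrics=["accuracy"])
```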
We trained the model for 10 iterations; the loss and accuracy plots are shown below:
We also trained the VGGNet model for 25 iterations; the loss and accuracy plots are shown below:
If you have any questions, reach out to me via email: t.morovati.99@gmail.com
@article{DBLP:journals/corr/abs-1807-08775,
  author     = {Charlie Hewitt and Hatice Gunes},
  title      = {CNN-based Facial Affect Analysis on Mobile Devices},
  journal    = {CoRR},
  volume     = {abs/1807.08775},
  year       = {2018},
  url        = {http://arxiv.org/abs/1807.08775},
  eprinttype = {arXiv},
  eprint     = {1807.08775},
  timestamp  = {Mon, 13 Aug 2018 16:47:27 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-1807-08775.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}