- Please download the pre-trained model weights from https://www.kaggle.com/datasets/zhansayasovetbek/emotion-classification-pre-trained-models-weights before training the model.
- You can download the datasets from https://www.kaggle.com/datasets/zhansayasovetbek/emotion-classification-head-direction-balanced
- https://docs.anaconda.com/free/anaconda/install/windows/
- https://pytorch.org/get-started/locally/
- Warning! Install PyTorch with CUDA support.
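To confirm that the CUDA-enabled build of PyTorch was installed correctly, a quick check along these lines can help:

```python
import torch

# Report the installed PyTorch version and whether a CUDA-capable GPU is visible.
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```

If `CUDA available: False` is printed on a machine with an NVIDIA GPU, the CPU-only wheel was likely installed; reinstall using the CUDA selector at https://pytorch.org/get-started/locally/.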
- Facial emotion recognition has received increasing attention in recent years due to its potential applications in various fields such as human-computer interaction, security, and healthcare. In this context, the orientation of a face has been identified as an important factor affecting the accuracy of facial emotion recognition.
- Two methodological approaches were used in this research: a baseline model and a proposed model. Both models classify face orientation directions and facial emotions. The models use Hopenet to estimate head pose angles (pitch, yaw, and roll) and then map them to one of five directions: forward, left, right, up, or down.
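The angle-to-direction step described above can be sketched as a simple thresholding rule. Note that the ±20-degree threshold and the sign convention for yaw (positive = left) are illustrative assumptions, not values taken from this repository:

```python
# Hedged sketch: map Hopenet-style pose angles (degrees) to one of five
# coarse directions. Threshold and sign conventions are assumptions.
def pose_to_direction(pitch: float, yaw: float, roll: float,
                      threshold: float = 20.0) -> str:
    """Return 'forward', 'left', 'right', 'up', or 'down'."""
    # Yaw dominates for left/right, pitch for up/down; roll is ignored here.
    if abs(yaw) >= abs(pitch) and abs(yaw) > threshold:
        return "left" if yaw > 0 else "right"
    if abs(pitch) > threshold:
        return "up" if pitch > 0 else "down"
    return "forward"

print(pose_to_direction(0.0, 35.0, 2.0))   # "left" under these assumptions
print(pose_to_direction(-30.0, 5.0, 0.0))  # "down" under these assumptions
```

The actual angle ranges used to balance the dataset may differ; the point is only that three continuous angles are collapsed into one of five discrete direction labels.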
- Pre-trained models such as MobileNetV3-small, ResNet-18, GoogLeNet, and others will be used to classify emotions and find the connection between facial emotion classification and head pose orientation.
- AffectNet (~440K images) labeled with eight emotions: happy, sad, surprise, fear, disgust, anger, contempt, and neutral.
- Pointing’04: ~15,000 images of people’s faces captured from various perspectives.
- AFLW2000-3D: 2,000 face photos with 3D annotations.
There are two models: Baseline and Proposed. Both use pre-trained models:
- HopeNet (for Head Pose Direction angles)
- MobileNetV3-small
- GoogLeNet
- ResNet-18
- VGG-16
- AlexNet
- AdaBoost
- Simple NN
Even though the percentage distribution of directions is almost the same across classes, some directions dominate others and can be predictive of emotion.