Face Detection - Facial Expression + Age + Gender Classification

Flask app to detect and classify a user's emotion, age and gender using convolutional neural networks.

Libraries

  • tensorflow 2.0.0

  • opencv

  • hog from skimage.feature

  • dlib

Data

The age and gender models are trained on Baidu's All-Age-Faces (AAF) dataset.

The facial expression model is trained on the Kaggle FER2013 dataset, which contains 28,709 labeled training images and 3,589 test images.

Model

  • For the facial expression model, we use the Keras functional API: four convolutional layers with ReLU activation and 'same' padding for the image input, plus a separate input layer for HOG (histogram of oriented gradients) and facial landmark features (see the sketch after this list). See the FER2013.ipynb notebook for more details about the model. We don't use transfer learning because our dataset contains greyscale images, which don't fit 3-channel pretrained models.

  • For age and gender, we use two pretrained models, Inception V3 and VGG16, for gender and age detection respectively (see the sketch below this list):
    • Inception V3 for gender detection

    • VGG16 for age detection
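The two-input functional model described above can be sketched roughly as follows. This is a minimal sketch, assuming 48x48 greyscale inputs; the filter counts, the feature-vector dimension and the dense head are illustrative placeholders, and the exact values live in FER2013.ipynb.

```python
from tensorflow.keras import layers, Model

# Image branch: 48x48 greyscale FER2013 crops through 4 conv blocks
# (filter counts here are illustrative, not the exact notebook values).
img_in = layers.Input(shape=(48, 48, 1), name="image")
x = img_in
for filters in (64, 128, 256, 512):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.BatchNormalization()(x)
    x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)

# Separate input for the HOG + landmark feature vector of the same face
# (the feature dimension is an assumption).
feat_in = layers.Input(shape=(2728,), name="hog_landmarks")
y = layers.Dense(128, activation="relu")(feat_in)

# Merge both branches and classify into the 7 FER2013 expressions.
z = layers.concatenate([x, y])
z = layers.Dense(256, activation="relu")(z)
z = layers.Dropout(0.5)(z)
out = layers.Dense(7, activation="softmax")(z)

model = Model(inputs=[img_in, feat_in], outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```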
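For the age and gender models, a hedged sketch of how the two pretrained backbones can be adapted is shown below. The input size, frozen layers and head sizes are assumptions; the actual configuration is in the age and gender notebooks.

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionV3, VGG16

def build_gender_model():
    # Inception V3 backbone with a binary head for Male / Female.
    base = InceptionV3(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False
    x = layers.GlobalAveragePooling2D()(base.output)
    out = layers.Dense(2, activation="softmax")(x)
    return Model(base.input, out)

def build_age_model():
    # VGG16 backbone with a 5-way head for the age ranges 1-15, 16-25, 26-35, 36-45, >46.
    base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False
    x = layers.Flatten()(base.output)
    x = layers.Dense(256, activation="relu")(x)
    out = layers.Dense(5, activation="softmax")(x)
    return Model(base.input, out)
```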

Training

Facial Expression training

  • The Kaggle dataset is already split into a train and a test set. The data is parsed from the csv file into numpy arrays and fed into an ImageDataGenerator (a sketch follows this list). See the FER2013.ipynb notebook for more details.

  • Our model achieved ~99% accuracy on the train set and ~75% on the validation set after 21 epochs.

  • We use this paper as a benchmark, which achieves 75.2% accuracy.

  • The two tables below show training results on 5 expressions obtained by amineHorseman. They show that face landmarks and histogram of oriented gradients (HOG) features improve accuracy by at most 4.5%, while batch normalization significantly improves model performance.

  • We extract face landmarks using dlib's shape predictor model (example code in the sketch after this list).

  • We compute HOG features using scikit-image's hog function (see the same sketch).

  • Model performance:
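The csv preprocessing mentioned in the first bullet can look roughly like this. The column names follow the Kaggle csv; the batch size and augmentation settings are assumptions, and this simplified version feeds only the image input.

```python
import numpy as np
import pandas as pd
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.utils import to_categorical

# The Kaggle csv has columns emotion, pixels and Usage (Training / PublicTest / PrivateTest).
df = pd.read_csv("fer2013.csv")

def to_arrays(frame):
    # Each row stores a 48x48 greyscale image as space-separated pixel values.
    images = np.stack([np.array(p.split(), dtype="uint8").reshape(48, 48, 1)
                       for p in frame["pixels"]])
    labels = to_categorical(frame["emotion"], num_classes=7)
    return images / 255.0, labels

x_train, y_train = to_arrays(df[df["Usage"] == "Training"])
x_test, y_test = to_arrays(df[df["Usage"] != "Training"])

# Augmentation settings are illustrative; the generator feeds batches to model.fit.
datagen = ImageDataGenerator(horizontal_flip=True, rotation_range=10, zoom_range=0.1)
train_flow = datagen.flow(x_train, y_train, batch_size=64)
```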
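The landmark and HOG feature extraction mentioned above can be sketched as follows, assuming the faces are already cropped to greyscale patches. The shape predictor file path and the HOG parameters are assumptions.

```python
import dlib
import numpy as np
from skimage.feature import hog

# Pretrained 68-point shape predictor (downloaded separately from dlib's model zoo).
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmark_features(grey_face):
    # Run the predictor on the whole (already cropped) face region.
    rect = dlib.rectangle(0, 0, grey_face.shape[1], grey_face.shape[0])
    shape = predictor(grey_face, rect)
    points = np.array([(p.x, p.y) for p in shape.parts()], dtype=np.float32)
    return points.flatten()  # 68 * 2 = 136 values

def hog_features(grey_face):
    # Histogram of oriented gradients over the face crop; parameters are illustrative.
    return hog(grey_face, orientations=8, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)
```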

Age and Gender training

  • The Baidu dataset contains 13322 face images (mostly Asian) distributed across all ages (from 2 to 80), including 7381 females and 5941 males. The original face images, facial landmarks and aligned face images are stored in the folders original images, key points, and aligned faces.

  • We read from the key points folder and turn the data into a dataframe with 3 columns: image_name, gender and age. We then loop through the dataframe, resolve the image path for each row, and copy the images into sub-folders that serve as the labels (a sketch follows this list). Gender has two labels, Male and Female, while Age has 5 labels corresponding to the age ranges 1-15, 16-25, 26-35, 36-45 and >46.

  • We also went through several pretrained models such as MobileNet, Inception ResNet V2 and even Levi & Hassner's model, which was trained specifically for age and gender detection, but none fit the Baidu dataset well.

  • See the Age and Gender notebooks for more information.
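A rough sketch of the label preparation described above is shown below. The intermediate csv file, folder names and file layout are assumptions; the actual parsing of the key points folder is in the notebooks.

```python
import os
import shutil
import pandas as pd

# Assumed to have been built from the "key points" folder as described above:
# one row per face with columns image_name, gender ("Male"/"Female") and age (int).
df = pd.read_csv("aaf_labels.csv")  # hypothetical intermediate file

def age_bucket(age):
    # Map a numeric age onto the 5 training labels.
    if age <= 15: return "1-15"
    if age <= 25: return "16-25"
    if age <= 35: return "26-35"
    if age <= 45: return "36-45"
    return ">46"

for _, row in df.iterrows():
    src = os.path.join("aligned faces", row["image_name"])
    for task, label in (("gender", row["gender"]), ("age", age_bucket(row["age"]))):
        # Each label becomes a sub-folder, so the images can be read with
        # flow_from_directory-style loaders.
        dst_dir = os.path.join("data", task, label)
        os.makedirs(dst_dir, exist_ok=True)
        shutil.copy(src, os.path.join(dst_dir, row["image_name"]))
```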

Flask app

  • Our Flask app connects to the user's webcam, scans their face, and outputs the user's age, gender and emotion (a minimal sketch follows).
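Below is a minimal sketch of the webcam streaming pattern such an app typically uses. The route name and the classify_face helper are hypothetical placeholders for the repository's actual detection and classification code.

```python
import cv2
from flask import Flask, Response

app = Flask(__name__)
camera = cv2.VideoCapture(0)  # default webcam

def classify_face(frame):
    # Hypothetical helper: run the emotion, age and gender models on the frame
    # and return a text overlay such as "Female, 26-35, happy".
    return "Female, 26-35, happy"

def generate_frames():
    # Yield an MJPEG stream that the browser renders as a live video feed.
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        label = classify_face(frame)
        cv2.putText(frame, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        _, buffer = cv2.imencode(".jpg", frame)
        yield (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n" + buffer.tobytes() + b"\r\n")

@app.route("/video_feed")
def video_feed():
    return Response(generate_frames(),
                    mimetype="multipart/x-mixed-replace; boundary=frame")

if __name__ == "__main__":
    app.run(debug=True)
```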
