
Visual-sentiment-prediction

About the dataset

The FER-2013 dataset consists of roughly 28,000 labeled images in the training set and about 3,500 labeled images each in the development and test sets. Each image is labeled with one of seven emotions: angry, disgust, fear, happy, sad, surprise, or neutral. Happy is the most prevalent class, so a majority-class baseline that always predicts happy reaches about 24.4% accuracy.

The images in FER-2013 are 48x48-pixel grayscale headshots and include both posed and unposed faces.

The dataset was created by gathering the results of Google image searches for each emotion and its synonyms.
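As a reference, here is a minimal sketch of loading the common Kaggle fer2013.csv distribution, where each row stores an integer emotion label and a space-separated string of 48x48 grayscale pixel values. The file name, column names, and label order are assumptions about how this repository stores the data, not a description of its actual scripts.

```python
# A minimal sketch of loading the Kaggle fer2013.csv distribution of FER-2013.
# The csv_path, column names, and the 0-6 label order below follow the Kaggle
# CSV and are assumptions about this repository's data layout.
import numpy as np
import pandas as pd

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def load_fer2013(csv_path="fer2013.csv"):
    df = pd.read_csv(csv_path)
    # Each "pixels" entry is a space-separated string of 48*48 grayscale values.
    images = np.stack([
        np.array(px.split(), dtype=np.uint8).reshape(48, 48)
        for px in df["pixels"]
    ]).astype("float32") / 255.0
    labels = df["emotion"].to_numpy()   # integer labels 0-6
    usage = df["Usage"].to_numpy()      # Training / PublicTest / PrivateTest split
    return images, labels, usage

if __name__ == "__main__":
    X, y, usage = load_fer2013()
    print(X.shape, y.shape)
```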

Using a pre-trained model

We use DeepFace, a deep-learning facial recognition system created by a research group at Facebook to identify human faces in digital images. It employs a nine-layer neural network with over 120 million connection weights and was trained on four million images uploaded by Facebook users. In this project, a CNN model is also trained and its emotion-recognition performance is compared against DeepFace.
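To illustrate, here is a hedged sketch of emotion analysis with the open-source deepface Python package (pip install deepface), which wraps pre-trained models behind a simple API. The input image path and the version-dependent return format are assumptions.

```python
# Emotion analysis with the `deepface` package.
# The input image path is hypothetical; enforce_detection=False keeps the call
# from raising an error when no face is detected in the image.
from deepface import DeepFace

result = DeepFace.analyze(
    img_path="test_face.jpg",      # hypothetical input image
    actions=["emotion"],           # restrict the analysis to emotion prediction
    enforce_detection=False,
)

# Recent deepface releases return a list with one dict per detected face;
# older releases return a single dict, so normalize before reading fields.
face = result[0] if isinstance(result, list) else result
print(face["dominant_emotion"])    # e.g. "happy"
print(face["emotion"])             # per-emotion confidence scores
```

For the comparison model, a minimal baseline CNN for 48x48 grayscale FER-2013 images is sketched below, assuming Keras; the architecture and hyperparameters are illustrative, not the repository's exact model.

```python
# A small baseline CNN for 48x48 grayscale inputs with seven emotion classes.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(48, 48, 1)),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(7, activation="softmax"),   # seven FER-2013 emotion classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```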

