This project was developed during HackInTheNorth 4.0.
The idea behind this project is to provide an API for real-time analysis of human emotions while people watch online content such as videos. We approach this as a deep-learning problem: models trained on large datasets, robust face alignment, and an expected-value formulation for age regression.
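The expected-value formulation treats age estimation as classification over discrete age bins and takes the probability-weighted mean as the predicted age. A minimal sketch in pure Python (the 101-bin range 0–100 is an assumption here, following the common IMDB-WIKI setup):

```python
def expected_age(probabilities):
    """Expected-value age regression: the predicted age is the
    probability-weighted mean over discrete age bins 0..N-1."""
    if abs(sum(probabilities) - 1.0) > 1e-6:
        raise ValueError("probabilities must sum to 1")
    return sum(age * p for age, p in enumerate(probabilities))

# Example: probability mass split between ages 20 and 30.
probs = [0.0] * 101
probs[20] = 0.5
probs[30] = 0.5
print(expected_age(probs))  # -> 25.0
```

Compared with taking the argmax bin, the expected value varies smoothly with the predicted distribution, which is why it is commonly used for age regression.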
Find Demo on YouTube: Reaction.AI
- User watches video.
- Camera records face.
- Real-time face detection, with simultaneous age and gender estimation using a WideResNet.
- Real-time Emotion Recognition using modified VGG-16 architecture.
- Emotion-wise data analysis for each gender and age group.
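Between detection and recognition, each face crop has to be brought into the emotion model's input format: 48x48, grayscale, normalized. A rough sketch of that per-frame preprocessing step in pure Python (nearest-neighbour resize for illustration; the actual pipeline would use OpenCV):

```python
def preprocess_face(gray, size=48):
    """Resize a 2-D grayscale face crop (list of pixel rows, values
    0-255) to size x size via nearest-neighbour sampling and scale
    pixel values to [0, 1], matching the FER2013 input format."""
    h, w = len(gray), len(gray[0])
    out = []
    for i in range(size):
        src_i = i * h // size  # nearest source row
        row = [gray[src_i][j * w // size] / 255.0 for j in range(size)]
        out.append(row)
    return out

face = [[128] * 96 for _ in range(96)]  # dummy 96x96 face crop
small = preprocess_face(face)
print(len(small), len(small[0]))  # -> 48 48
```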
FER2013: The data consists of 48x48-pixel grayscale images of faces. The task is to categorize each face, based on the emotion shown in the facial expression, into one of seven categories (0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral). The training set consists of 28,709 examples.
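Since the seven FER2013 labels map directly to indices, decoding the model's softmax output is just an argmax plus a lookup:

```python
# FER2013 label order: index i corresponds to label i in the dataset.
EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

def decode_emotion(scores):
    """Return the FER2013 label with the highest score."""
    return EMOTIONS[max(range(len(scores)), key=scores.__getitem__)]

print(decode_emotion([0.05, 0.0, 0.1, 0.6, 0.1, 0.1, 0.05]))  # -> Happy
```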
IMDB-WIKI dataset: We obtained 460,723 face images of 20,284 celebrities from IMDb and 62,328 from Wikipedia, 523,051 in total.
For emotion recognition, we trained a modified version of the VGG-16 architecture.
Accuracy Graph:
Loss Graph:
- Training Accuracy: 90.50%
- Validation Accuracy: 67.59%
- Testing Accuracy: 66.17%
- Inference Time: 0.4464 sec
This project is built on Python 3 and depends on the following libraries:
- Matplotlib
- TensorFlow
- Keras
- Django
- OpenCV
- py-agender
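Assuming the standard PyPI package names for the libraries above (`opencv-python` for OpenCV), the dependencies can be installed with pip:

```shell
pip install matplotlib tensorflow keras django opencv-python py-agender
```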
Laptop/desktop camera access is required.
The following are some of the applications of Reaction.AI:
- Age Approximation
- Detect the target audience to produce better content
- Avoid fake reviews
- Block offensive videos
- See how content is received within seconds of release
Future scope:
- Improving the accuracy of the emotion detection model
- Building an attractive dashboard for easy and simple understanding
- Improving the accuracy of age prediction
- Generating an API compatible with various platforms
Feel free to star, fork, and contribute to the repo!