Build a deep learning model that detects students' emotions in real time through a webcam and gives the instructor real-time, aggregated feedback about the class, so the instructor can gauge from the students' expressions whether they are grasping the topic. A CNN model is used for classification. The data consists of 48x48-pixel grayscale images of faces. The faces have been automatically registered so that each face is roughly centred and occupies about the same amount of space in every image. The task is to categorize each face, based on the emotion shown in the facial expression, into one of seven categories (0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral). The training set consists of 28,709 examples and the public test set consists of 7,178 examples.
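The kind of CNN described above can be sketched as follows. This is a minimal illustration assuming TensorFlow/Keras; the layer sizes and `build_emotion_cnn` name are assumptions for illustration, not the repository's actual architecture.

```python
# Illustrative CNN for 48x48 grayscale FER images (assumed architecture,
# not the repository's exact model).
from tensorflow.keras import layers, models

def build_emotion_cnn(num_classes=7):
    model = models.Sequential([
        layers.Input(shape=(48, 48, 1)),          # 48x48 grayscale input
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),                      # regularization
        layers.Dense(num_classes, activation="softmax"),  # 7 emotion scores
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_emotion_cnn()
```

After training on the 28,709 training images, such a model would be saved with `model.save("model.h5")`.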
- model.h5 - the trained CNN weights, which encode what the model learned about the emotion classes (Happy, Angry, and so on) from the training set.
- Video and pics - sample outputs of emotion detection on the live webcam feed.
- There are seven facial expression classes (0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral); the training set consists of 28,709 examples and the public test set of 7,178 examples.
- The CNN model was chosen because it had the highest accuracy, 73.40 percent, while the ResNet model's accuracy was around 63 percent.
- As a result, we save the CNN model and use it to predict facial expressions.
- Because the Disgust and Surprise classes have far fewer training images, the model rarely detects those two emotions on the local webcam.
- Our model can successfully detect faces and predict emotions on a live webcam feed as well as on video.