
Facial-Emotion-Recognition

The challenge consists of two parts. The first part is making a prediction over 5 classes with any kind of AI model. The second part is inserting desired values into a database, such as the rectangle's corners, the loaded image's path, and so on.

1. Dataset:

I used the FER2013 dataset for this challenge. It consists of 7 classes: “Angry”, “Disgust”, “Fear”, “Happy”, “Neutral”, “Sad”, and “Surprise”. FER2013 is a well-studied dataset that has been used in ICML competitions and several research papers. It is one of the more challenging datasets, with human-level accuracy only at 65±5% and the highest-performing published works achieving 75.2% test accuracy. I dropped two classes (Disgust and Surprise). Afterwards, I assigned Sad and Angry as “NOT Okay”, while the other 3 classes were assigned as “Okay”.

You can see the test and training data numbers in the figure below.
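The remapping described above can be sketched as a plain Python dictionary (the label names come from FER2013; the helper function name is my own, not part of the project):

```python
# Map the 7 FER2013 labels to the binary "Okay"/"NOT Okay" scheme.
# "Disgust" and "Surprise" are dropped entirely (mapped to None).
FER_TO_BINARY = {
    "Angry": "NOT Okay",
    "Sad": "NOT Okay",
    "Fear": "Okay",
    "Happy": "Okay",
    "Neutral": "Okay",
    "Disgust": None,   # dropped
    "Surprise": None,  # dropped
}

def remap_labels(labels):
    """Keep only the 5 retained classes, remapped to Okay / NOT Okay."""
    return [FER_TO_BINARY[name] for name in labels
            if FER_TO_BINARY[name] is not None]
```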

2. AI:

Traditionally, CNNs are mostly used for image processing, and I kept this tradition. My model is similar to the VGG architecture. Before training, I generated some artificial data with the help of ImageDataGenerator. Then I set some training parameters, such as batch size (64) and epochs (100). The final test accuracy is 68.5±3%.
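The data-generation idea can be illustrated with a minimal NumPy sketch; here only a horizontal flip is shown, as a stand-in for Keras' ImageDataGenerator, which also offers shifts, rotations, zooms, and more:

```python
import numpy as np

def augment_with_flips(images):
    """Double a batch of grayscale face images by appending horizontally
    flipped copies. `images` has shape (n, height, width)."""
    flipped = images[:, :, ::-1]  # mirror each image left-right
    return np.concatenate([images, flipped], axis=0)
```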

3. Outputs:

  • You can make real-time predictions with FER_video.py.
  • You can predict the facial emotions of an uploaded image with FER_image.py. In addition, FER_image.py creates a separate text file and prints the predicted class and the coordinates of the detected face to that txt file.

Here are some test examples:

[Five sample prediction images are shown in the repository.]

4. Database:

In this challenge, I used a MySQL database. The table has 6 columns: “id”, “path”, “coordinates”, “state”, “time”, and “class_name”.

CREATE TABLE `fer` (
 `id` int(6) UNSIGNED NOT NULL,
 `path` varchar(50) NOT NULL,
 `coordinates` varchar(50) NOT NULL,
 `state` varchar(50) NOT NULL,
 `class_name` varchar(50) DEFAULT NULL,
 `time` varchar(50) DEFAULT NULL
)
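An insert into this table can be sketched as follows. I use Python's built-in sqlite3 here as a stand-in for MySQL (the real project would use a MySQL driver such as mysql-connector-python, whose placeholder syntax is %s rather than ?), and the row values are made-up examples:

```python
import sqlite3

# In-memory database with the same columns as the `fer` table above.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE fer (
    id INTEGER NOT NULL,
    path VARCHAR(50) NOT NULL,
    coordinates VARCHAR(50) NOT NULL,
    state VARCHAR(50) NOT NULL,
    class_name VARCHAR(50),
    time VARCHAR(50)
)""")

# Insert one prediction row (all values here are hypothetical examples).
row = (1, "images/face1.png", "34,20,96,96", "detected",
       "Okay", "2024-01-01 12:00:00")
conn.execute("INSERT INTO fer VALUES (?, ?, ?, ?, ?, ?)", row)
conn.commit()
```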

5. How to Run

  1. Fork this repository.
$ git clone https://github.com/MelihGulum/Facial-Emotion-Recognition.git
  2. Load the dependencies of the project.

NOTE: These dependencies do not include the Deep Learning part. Colab meets all of those dependencies (such as TensorFlow).

pip install -r requirements.txt
  3. Now you can run FER_video.py or FER_image.py. But if you want, you can run the .ipynb and build your own model. It is up to you.