belief-function

Emotion Classifier - based on Evidence Theory by Dempster-Shafer

Abstract

In formal logic, every event is either true or false. Bayesian probability theory extends this idea and allows us to assign a likelihood of being true to every possible event. Dempster-Shafer theory (DST) generalizes this further by:

  1. allowing the assignment of probability masses to sets of events rather than single events and
  2. providing methods to combine such distributions originating from different sources.

A common use case is the fusion of sensor values or measurements.

Another example where different measurements must be combined is the classification of emotions based on pictures. Digital image processing methods can be used to extract several features from pictures of human faces, e.g. mouth opening, eye aperture and number of furrows. These features allow the assessment of the emotion shown by the photographed face.

The goal of this exercise is to develop an emotion classifier that reads a list of the above-mentioned features and uses DST to find the most plausible emotion shown in the picture. Pre-extracted example data for 50 pictures is provided for this exercise; the extraction of the features from actual images is not in scope.

Table of contents

Architectural and Technical Overview

Build and Execution

Invocation Example:

Output Example:

API Documentation

Test and Validation

Summary

Architectural and Technical Overview

The project is realized in C++ because it provides the flexibility to create widely usable libraries and the performance needed for video applications. No external libraries are used. The application logic is split into several parts, each of which is reusable on its own.

The following APIs are provided:

dempstershafer.hpp/cpp

Contains all the Dempster-Shafer logic: creating evidences, combining evidences, and calculating beliefs and plausibilities.

learningclassificator.hpp/cpp

Contains the logic used to classify whether values are large or small. It uses online learning to adjust the classification boundary with every value it classifies.

csvreader.hpp/cpp

Contains the logic needed to read .csv data files.

The Dempster-Shafer library is designed to be very fast while providing an API that is easy to use and easy to read. First, a DempsterShaferUniverse object is created and all hypotheses are added to it. The API uses untyped pointers (void pointers) as hypotheses, so any kind of object can be used; all comparisons rely only on the memory address. The DempsterShaferUniverse can then be used to create instances of the Evidence class, which are used to add focal hypothesis sets and to perform all other operations. Details can be found in the API documentation.

To detect emotions in videos, several features must be classified as either large or small. The basic approach taken in this project is to determine the mean of all values for a feature and to classify values above the mean as large and values below the mean as small. Because correct means cannot be determined upfront in live-video applications, and because the values can change with perspective, subject, etc., the classifier also includes a simple form of online learning, which adjusts the means with every value that is classified.

To account for the fact that a very large or very small value is more significant than one near the mean, the classifier returns a value between -1.0 (far below the mean) and +1.0 (far above the mean). This value is also used as the mass for the corresponding evidence. A multiplicative bias is applied to the value so that a mass of 1.0 can never be reached; the omega set is therefore always contained, and the conflict when combining two measurements is reduced.

The CSV reader class offers a simple API to read information from a .csv file with a specific structure: the delimiter must be a semicolon, the first row must contain the header information, and the remaining rows are treated as the actual data, which must be convertible to integers. When a CSVReader object is created, the file is loaded and parsed once, so the subsequent API calls execute quickly without accessing the file again.

Build and Execution

If the make utility is available on the system, simply executing

make all

in the project folder will build the main executable and all tests.

To compile the main project without make, main.cpp, dempstershafer.cpp, csvreader.cpp and learningclassificator.cpp must be compiled and linked together, e.g. with g++:

g++ -o lab_exercise main.cpp dempstershafer.cpp csvreader.cpp learningclassificator.cpp

make creates an executable named lab_exercise. It can be run from the command line and takes one file with test data as its argument. The format of the data file must match that of the two provided files (which are included in the project folder as well).

The project executable iterates over all frames in the data file and classifies the displayed emotion. The output is commented and should be self-explanatory.

Details about how the classification works can be found in the API documentation of dempstershafer.hpp.

Invocation Example:

project_folder$ ./lab_exercise test_data.csv

Output Example:

---------------------------------
### Frame: 046 ###
---------------------------------
(-1.0: far below average, +1.0 far above average)
Eye Aperture:  14 -> -0.19
Mouth Opening: 12 -> -0.58
Furrow Count: 443 -> -0.25
---------------------------------
(#: Belief, -: Plausability, .: nothing)
Fear     | -------------.....................................
Surprise | -------------.....................................
Disdain  | ----------------..................................
Disgust  | ####-----------------------------------...........
Anger    | ##########--------------------------------........
---------------------------------
classified as: anger
---------------------------------

API Documentation

The three application parts introduced above provide complete API documentation in the corresponding C++ header files.

Test and Validation

The functionality of the three reusable APIs of the project is tested with unit tests. The interaction of the parts is tested by the lab exercise itself.

The Dempster-Shafer library unit test uses a Dempster-Shafer exercise from an old exam. It creates a set of suspects, which are added to the universe. Based on witness statements, focal and omega sets are created. The reference values are taken from the exam, with a maximum deviation of 0.001.

The learning classificator unit test creates a new classifier with a learning rate of 0.1. A feature with the value 100 is added. To test the classification, a value of 100 is classified, which results in a deviation of zero. Then the online learning is tested by classifying two values of 90; each classification changes the average, so the two classifications do not return the same result. Finally, the cap is tested by classifying a 1.0 (which results in a deviation of -1.0) and a 10000 (which results in a deviation of +1.0).

The CSV reader unit test uses the provided test_data.csv file to test the reading of a .csv file; it assumes that test_data.csv is unmodified. First, the test checks that the number of columns and rows is correct. Then the correctness of the header is tested. Finally, the extraction of the first row and the fetching of a value via its column name are tested.

To test the system as a whole, we calculated the results manually and then compared the values of the program with our manually evaluated results.

Summary

The solution is fully functional and provides reasonable results for the given data. It may perform even better with more features and with fine-tuning of the learning rate of the classifier, the initial averages, and the bias for the evidence masses. This, however, would require more preclassified test data.