
Commit

Reorganize repo
aperez-rai committed Dec 20, 2017
1 parent 43b3fa1 commit 7a08d09
Showing 239 changed files with 169 additions and 862 deletions.
10 changes: 0 additions & 10 deletions Dockerfile

This file was deleted.

110 changes: 110 additions & 0 deletions README.md
@@ -0,0 +1,110 @@
# FERPython
FERPython is a [Facial Expression Recognition (FER)](https://en.wikipedia.org/wiki/Emotion_recognition) Python toolbox containing deep neural net classes that predict emotions from facial expression images. These models can be trained on FER image datasets and then used to predict emotions in new images.

![Labeled FER Images](readme_docs/labeled_images.png "Labeled Facial Expression Images")
*Figure from [@Chen2014FacialER]*

This is a community effort, initiated by the [ThoughtWorks Arts Residency](https://thoughtworksarts.io/) program in New York and based on research by Dr. Hongying Meng at Brunel University in London.

Our aim is to make this research available to all as a public FER toolkit that is easy to use out of the box. We are also looking to expand our development community and are open to contributions - you can [contact us](mailto:aperez@thoughtworks.com) to discuss.

## Datasets

For now, you will need to supply your own facial expression image dataset. We aim to provide pre-trained prediction models in the near future; until then, you can try out the system with your own dataset or one of the small sample datasets provided in the [image-data](image-data) subdirectory.

Ideally, predictions should perform well across diverse datasets, illumination conditions, and subsets of the standard seven emotion labels used in FER research (happiness, anger, fear, surprise, disgust, sadness, calm/neutral). Two good public datasets to start with are the [Extended Cohn-Kanade](http://www.consortium.ri.cmu.edu/ckagree/) and [FER+](https://github.com/Microsoft/FERPlus) datasets. The sketch below shows the CSV layout the examples in this repository expect.
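
The example scripts in this repository load training data from a CSV file with the emotion label in one column and the image in another. As a minimal sketch, assuming a FER2013-style layout in which each image is flattened into a string of space-separated grayscale pixel values (2304 values for a 48x48 image), the data would look something like this:

```
emotion,pixels
0,70 80 82 72 58 58 60 63 54 ... (2304 values in total)
4,151 150 147 155 148 133 111 ... (2304 values in total)
```

The `csv_label_col=0` and `csv_image_col=1` arguments in the examples below tell the image processor which column holds which field.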

-----

TODO: add ReadTheDocs link

TODO: Add diagrams to overview and explanation

TODO: Add how long it'll take to install and run a model

We want the open-source community's help in experimenting with deep neural network architectures to improve FER performance and in developing a public, easy-to-use FER toolkit. We also aim to provide pre-trained prediction models for quick and easy experimentation, plus a top-layer class (FERModel) that can train and/or predict given only an image dataset.

Note: Levy Rosenthal
wave synthesis - 3D measurement of emotion, arousal and ??


## Toolkit Overview

The library's deep neural net classes live in the neuralnets.py module; they build on image pre-processing, feature extraction, and regression classes/functions found in the other modules, as sketched below. Documentation can be found here.
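
To give a feel for how the modules fit together, here is a condensed sketch adapted from [examples/convolutional_lstm.py](examples/convolutional_lstm.py) (renamed in this commit). The `target_labels` and `target_dimensions` values are illustrative assumptions, and the per-feature reshaping step from the full example is elided:

```python
import sys
sys.path.append('../')
from imageprocessor import ImageProcessor
from featureextractor import FeatureExtractor
from neuralnets import ConvolutionalLstmNN

# Illustrative values; see examples/convolutional_lstm.py for the real ones.
target_labels = [0, 1, 2, 3, 4, 5, 6]  # assumed integer emotion labels
raw_dimensions = (48, 48)
target_dimensions = (64, 64)           # assumed resize target
channels = 1
time_delay = 1
validation_split = 0.15

# 1. Pre-process: load labeled images from a FER-style CSV
#    (label in column 0, image in column 1).
imageProcessor = ImageProcessor(from_csv=True, target_labels=target_labels,
                                datapath='image_data/sample.csv',
                                target_dimensions=target_dimensions,
                                raw_dimensions=raw_dimensions,
                                csv_label_col=0, csv_image_col=1, channels=channels)
images, labels = imageProcessor.get_training_data()

# 2. Extract features: here, HOG descriptors over the pre-processed images.
featureExtractor = FeatureExtractor(images, return_2d_array=True)
featureExtractor.add_feature('hog', {'orientations': 8,
                                     'pixels_per_cell': (16, 16),
                                     'cells_per_block': (1, 1)})
features = featureExtractor.extract()
# (The full example reshapes each extracted feature for the time-delayed
# input before training; that step is elided here.)

# 3. Train: fit a convolutional LSTM net on the extracted features.
model = ConvolutionalLstmNN(target_dimensions, channels, target_labels,
                            time_delay=time_delay)
model.fit(features, labels, validation_split)
```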

## Installation

The first step is to clone the repository and change into it in your terminal:

```
git clone https://github.com/thoughtworksarts/fer-python.git
cd fer-python
```

You will need to install Python 3.6.3. We recommend managing your Python version with pyenv. On macOS, install pyenv with Homebrew:

```
brew install pyenv
```

Next, install Python 3.6.3 using pyenv and set it as the local version while in the fer-python directory:
```
pyenv install 3.6.3
pyenv local 3.6.3
```

Once Python 3.6.3 is set up, install all additional dependencies:

```
pip install -r requirements.txt
```
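
Before moving on, it is worth confirming that the pinned interpreter is active (`pyenv local` writes a `.python-version` file that pyenv picks up in this directory). The following should print `Python 3.6.3`:

```
python --version
```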

Now you're ready to go!

## Try Out Some Examples

You can find example code to run each of the current neural net classes in the [examples](examples) subdirectory. The FERModel example below requires nothing more than a set of target emotions and a data path. Eventually, FERModel will choose the best-performing neural net based on the chosen target emotions.

You have already cloned the repository during installation. To run the FERModel example described below, move into the examples folder and launch the script (renamed to fermodel_example.py in this commit):
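
```
cd examples
python fermodel_example.py
```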

#### Example using FERModel:

```python
import sys
sys.path.append('../')  # make the repository's top-level modules importable
from fermodel import FERModel

target_emotions = ['anger', 'fear', 'neutral', 'sad', 'happy', 'surprise', 'disgust']
csv_file_path = "image_data/sample.csv"
model = FERModel(target_emotions, csv_data_path=csv_file_path, raw_dimensions=(48,48), csv_image_col=1, csv_label_col=0, verbose=True)
model.train()
```

TODO: Describe the output you will see when you run this example.

## Guiding Principles

- __FER for Good__. FER applications have the potential to be used for malicious purposes. We want to build FERPython with a community that champions integrity, transparency, and awareness, and we hope to instill these values throughout development while maintaining an accessible, high-quality toolkit.

- __User Friendliness.__ FERPython prioritizes user experience. It is designed to make getting an FER prediction model up and running as easy as possible, minimizing the requirements on users for basic use cases.

- __Experimentation to Maximize Performance__. Optimal performance in FER prediction is a primary goal. The deep neural net classes are designed to easily modify training parameters, image pre-processing options, and feature extraction methods in the hopes that experimentation in the open-source community will lead to high-performing FER prediction.

- __Modularity.__ FERPython contains four base modules (fermodel, neuralnets, imageprocessor, and featureextractor) that can be easily used together with minimal restrictions.

## Contributing

1. Fork it!
2. Create your feature branch: `git checkout -b my-new-feature`
3. Commit your changes: `git commit -am 'Add some feature'`
4. Push to the branch: `git push origin my-new-feature`
5. Submit a pull request :D

This is a new library that has a lot of room for growth. Check out the list of open issues that we need help addressing!


[@Chen2014FacialER]: https://www.semanticscholar.org/paper/Facial-Expression-Recognition-Based-on-Facial-Comp-Chen-Chen/677ebde61ba3936b805357e27fce06c44513a455 "Facial Expression Recognition Based on Facial Components Detection and HOG Features"
6 changes: 0 additions & 6 deletions RIOTRuntime/EmotionEnum.py

This file was deleted.

15 changes: 0 additions & 15 deletions RIOTRuntime/FacialRecognitionAPI.py

This file was deleted.

17 changes: 0 additions & 17 deletions RIOTRuntime/FacialRecognitionUtil.py

This file was deleted.

Binary file removed RIOTRuntime/images/Greyscale.jpg
Binary file removed RIOTRuntime/images/Mickey_Mouse.png
Binary file removed RIOTRuntime/images/testimage.png
11 changes: 0 additions & 11 deletions RIOTRuntime/input_sources/InputInterface.py

This file was deleted.

1 change: 0 additions & 1 deletion RIOTRuntime/main.py

This file was deleted.

10 changes: 0 additions & 10 deletions data/fer2013/README

This file was deleted.

19 changes: 0 additions & 19 deletions data/fer2013/fer2013.bib

This file was deleted.

26 changes: 0 additions & 26 deletions data/temp.py

This file was deleted.

File renamed without changes.
14 changes: 6 additions & 8 deletions experiments/convLstmNNtest.py → examples/convolutional_lstm.py
@@ -1,10 +1,10 @@
 import sys
-sys.path.append('../data')
-sys.path.append('../fer')
+sys.path.append('../')
 from imageprocessor import ImageProcessor
-from neuralnets import TransferLearningNN, TimeDelayNN, ConvolutionalLstmNN
+from neuralnets import ConvolutionalLstmNN
 from featureextractor import FeatureExtractor
 import numpy as np
 from skimage import color, io

 time_delay = 1
 raw_dimensions = (48, 48)
@@ -17,7 +17,7 @@

 print('--------------- Convolutional LSTM Model -------------------')
 print('Collecting data...')
-csv_file_path = "../data/fer2013/fer2013.csv"
+csv_file_path = "image_data/sample.csv"
 imageProcessor = ImageProcessor(from_csv=True, target_labels=target_labels, datapath=csv_file_path, target_dimensions=target_dimensions, raw_dimensions=raw_dimensions, csv_label_col=0, csv_image_col=1, channels=1)
 images, labels = imageProcessor.get_training_data()
 if verbose:
@@ -26,7 +26,6 @@
 print('Extracting features...')
 featureExtractor = FeatureExtractor(images, return_2d_array=True)
 featureExtractor.add_feature('hog', {'orientations': 8, 'pixels_per_cell': (16, 16), 'cells_per_block': (1, 1)})
-#featureExtractor.add_feature('lbp', {'n_points': 24, 'radius': 3})
 raw_features = featureExtractor.extract()
 features = list()
 for feature in raw_features:
@@ -40,6 +39,5 @@
 validation_split = 0.15

 print('Training net...')
-net = ConvolutionalLstmNN(target_dimensions, channels, target_labels, time_delay=time_delay)
-net.fit(features, labels, validation_split)
-
+model = ConvolutionalLstmNN(target_dimensions, channels, target_labels, time_delay=time_delay)
+model.fit(features, labels, validation_split)
6 changes: 3 additions & 3 deletions experiments/fermodel_test.py → examples/fermodel_example.py
@@ -1,8 +1,8 @@
 import sys
-sys.path.append('../fer')
+sys.path.append('../')
 from fermodel import FERModel

-target_emotions = ['anger', 'fear', 'calm', 'sad', 'happy', 'surprise']
-csv_file_path = "../data/fer2013/fer2013.csv"
+target_emotions = ['anger', 'fear', 'neutral', 'sad', 'happy', 'surprise', 'disgust']
+csv_file_path = "image_data/sample.csv"
 model = FERModel(target_emotions, csv_data_path=csv_file_path, raw_dimensions=(48,48), csv_image_col=1, csv_label_col=0, verbose=True)
 model.train()