
Commit

Scripts to download models, and environment files to set up the environment
tesYolan committed Jun 2, 2018
1 parent d1ad354 commit e4ec540
Showing 4 changed files with 66 additions and 2 deletions.
21 changes: 19 additions & 2 deletions README.md
@@ -1,6 +1,20 @@
# Emotion recognition from face features
This project aims to train a model that detects emotion from face images.

## Install prerequisites
### Using conda
```
# create or update the environment from environment.yml
conda env update
# download the dlib face landmark model
python get_models.py
# activate the environment
source activate emotion-recogntion-snet-agent
```
### Using pip
```
# install dependencies with pip
pip install -r requirements.txt
# download the dlib face landmark model
python get_models.py
```
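Not part of the repository, but a quick way to confirm the environment installed cleanly is to import the main dependencies (a minimal sketch, assuming the install above succeeded):
```
# Hypothetical sanity check: confirm the core dependencies import and
# report their versions (they should match requirements.txt / environment.yml).
import cv2
import dlib
import keras
import tensorflow as tf

print("OpenCV:", cv2.__version__)
print("dlib:", dlib.__version__)
print("Keras:", keras.__version__)
print("TensorFlow:", tf.__version__)
```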
## How to preprocess datasets
This project uses the [CK+ dataset](http://www.consortium.ri.cmu.edu/ckagree/) and the [Kaggle fer2013 dataset](https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data).
The datasets should be saved inside a single directory that contains ```train``` and ```test``` folders.
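A minimal sketch of checking that layout before training, assuming a hypothetical dataset root path (not part of the project's scripts):
```
# Hypothetical layout check: the dataset root is assumed to contain
# `train` and `test` sub-folders, as described above.
import os

dataset_root = "path/to/dataset"  # example path, adjust to your setup
for split in ("train", "test"):
    folder = os.path.join(dataset_root, split)
    if not os.path.isdir(folder):
        raise FileNotFoundError("Expected folder is missing: " + folder)
print("Dataset layout looks fine")
```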
@@ -22,6 +22,9 @@ The four-inputs model can be trained in three steps
```
cd /path-to-project
wget "http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2"
bzip2 -d shape_predictor_68_face_landmarks.dat.bz2
# or
python get_models.py
```
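Not part of the project's code, but for orientation, a minimal sketch of how the downloaded landmark model is typically loaded with dlib (`face.jpg` is a hypothetical example image):
```
# Load the 68-point landmark model and extract landmarks from one image.
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

image = dlib.load_rgb_image("face.jpg")  # hypothetical example image
for rect in detector(image, 1):          # upsample once to catch smaller faces
    shape = predictor(image, rect)
    points = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
    print(len(points), "landmarks detected")
```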
The training program can be run with the following command
```
@@ -77,7 +94,7 @@ python singnet_wrap/run-snet-service
cd singnet_wrap
python test.py
```
####TODO
#### TODO
* Better data responses to queries. As it stands we just serialize the data as a string, but it would be better to use gRPC messages for a consistent format when communicating with other services or a single user (see the sketch after this list).
* The daemon has yet to be tested on the Kovan testnet. That needs to proceed.
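A hypothetical sketch of the kind of structured response the first TODO item points toward, using a plain dataclass serialized to JSON as a stand-in for the gRPC message it proposes (field names and values are assumptions):
```
# Hypothetical structured response for the emotion service, instead of a
# plain serialized string. Field names and example values are assumptions.
from dataclasses import dataclass, asdict
import json

@dataclass
class EmotionResponse:
    label: str           # predicted emotion, e.g. "happy"
    confidence: float    # score for the predicted label
    bounding_box: tuple  # (left, top, right, bottom) of the detected face

response = EmotionResponse("happy", 0.93, (24, 30, 120, 126))
print(json.dumps(asdict(response)))  # a consistent payload for RPC responses
```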

26 changes: 26 additions & 0 deletions environment.yml
@@ -0,0 +1,26 @@
name: emotion-recogntion-snet-agent
dependencies:
- Keras==2.1.5
- numpy==1.14.3
- protobuf==3.5.2
- ptyprocess==0.5.2
- Pygments==2.2.0
- pyparsing==2.2.0
- python-dateutil==2.7.3
- pytz==2018.4
- PyWavelets==0.5.2
- PyYAML==3.12
- scikit-image==0.13.1
- scipy==1.1.0
- simplegeneric==0.8.1
- six==1.11.0
- tensorboard==1.8.0
- tensorflow==1.8.0
- tqdm
- pip:
  - snetd-alpha==0.1.0
  - jsonrpcserver==3.5.4
  - jsonrpcclient==2.5.2
  - aiohttp==3.2.1
  - opencv-python==3.4.1.15
  - dlib
15 changes: 15 additions & 0 deletions get_models.py
@@ -0,0 +1,15 @@
import requests
import bz2
import math
from tqdm import tqdm

model = "http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2"

# Stream the archive and decompress it chunk by chunk so the whole file is
# never held in memory.
response = requests.get(model, stream=True)
total_size = int(response.headers.get('content-length', 0))
print('Downloading {} bytes'.format(total_size))
decompressor = bz2.BZ2Decompressor()
with open('shape_predictor_68_face_landmarks.dat', 'wb') as f:
    for data in tqdm(response.iter_content(1024), total=math.ceil(total_size / 1024), unit='KB'):
        f.write(decompressor.decompress(data))
print('Saved shape_predictor_68_face_landmarks.dat')
6 changes: 6 additions & 0 deletions requirements.txt
@@ -51,3 +51,9 @@ traitlets==4.3.2
wcwidth==0.1.7
webencodings==0.5
Werkzeug==0.14.1
snetd-alpha==0.1.0
jsonrpcserver==3.5.4
jsonrpcclient==2.5.2
aiohttp==3.2.1
opencv-python==3.4.1.15
dlib
