Detects faces and their emotions, and draws the corresponding emoji over each face.

hey, what's that?

It's just a bunch of Python scripts. Together they:

  1. Harvest an emotion dataset, extracting the faces from it in a normalized way (same size, grayscale; see the sketch after this list)
  2. Train a Fisherfaces classifier to classify emotions
  3. Swap faces for emojis in real time (using the video stream from a webcam)
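
The normalization in step 1 comes down to cropping each detected face, converting it to grayscale, and resizing it to a fixed size. A minimal sketch, assuming a 350x350 target size (the size the scripts actually use may differ):

```python
import cv2

FACE_SIZE = (350, 350)  # assumed target size; check process_dataset.py

def normalize_face(image, face_rect):
    """Crop a detected face, convert it to grayscale and resize it."""
    x, y, w, h = face_rect
    face = image[y:y + h, x:x + w]
    gray = cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)
    return cv2.resize(gray, FACE_SIZE)
```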

what steps should i follow to run it?

If you don't have a dataset, you can use the model we trained and start from step 4:

  1. Put the CK emotion dataset inside the /data/ folder, following the tips from Paul van Gent's blog post
  2. Run process_dataset.py. It harvests the dataset, normalizes the images, and puts the neutral and emotion images into /data/sorted_set/
  3. Run prepare_model.py to train a model using the files in /data/sorted_set/. You can specify the list of emotions you want to use. It saves the trained model to /models/emotion_detection_model.xml (see the training sketch after this list)
  4. Run webcam.py. It opens a webcam stream, detects emotions on faces (using /models/emotion_detection_model.xml) and swaps them for the matching emojis (/graphics/)
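
Step 3 is a standard Fisherfaces training run. A minimal sketch of roughly what prepare_model.py does, assuming the directory layout above (the emotion list here is just an example):

```python
import glob

import cv2
import numpy as np

emotions = ['neutral', 'happy', 'sad']  # example subset; pick your own

images, labels = [], []
for label, emotion in enumerate(emotions):
    for path in glob.glob('data/sorted_set/%s/*' % emotion):
        images.append(cv2.imread(path, cv2.IMREAD_GRAYSCALE))
        labels.append(label)

model = cv2.face.FisherFaceRecognizer_create()  # needs opencv-contrib-python
model.train(images, np.asarray(labels))
model.write('models/emotion_detection_model.xml')
```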

what do i need to run it?

To see it in action, you need:

  1. Python
  2. OpenCV (a quick sanity check of your installation follows below)
  3. A dataset to train a model, though you can use the one we provide, which was trained on http://www.consortium.ri.cmu.edu/ckagree/
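
Fisherfaces lives in OpenCV's contrib face module, so it's worth verifying your installation before step 3 (the project was updated to work with OpenCV 3.4.3 and Python 3.6):

```python
import cv2

print(cv2.__version__)       # 3.4.x is known to work
print(hasattr(cv2, 'face'))  # True means the contrib 'face' module is available
```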

faq

Q: I have my own emotion dataset. How can I use it with these scripts?

A: You need to supply normalized images of faces showing the emotions. Use the find_faces method from face_detect.py to find the faces in your images. It returns cropped and normalized faces; save them to /data/sorted_set/%emotion_name%. Then run step 3 to train a model and step 4 to begin swapping faces from the webcam. A sketch of that workflow follows below.
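
This sketch assumes find_faces returns (face, coordinates) pairs; check face_detect.py for the exact return value. The source image path and emotion label here are just examples:

```python
import cv2
from face_detect import find_faces

# hypothetical source image labeled 'happy' in your own dataset
image = cv2.imread('my_dataset/happy/001.jpg')

# save every detected, normalized face into the sorted set
for i, (face, coordinates) in enumerate(find_faces(image)):
    cv2.imwrite('data/sorted_set/happy/%d.png' % i, face)
```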

Q: I want to use different emojis, for example my university professors' faces

A: Place your university professors' heads inside the /graphics/ folder, following the filename convention (the filename should be an emotion label). A sketch of how such a graphic is likely looked up follows below.
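
How the convention is probably consumed (an assumption; see webcam.py for the real loading code): the emotion label predicted for a face doubles as the graphic's filename:

```python
import cv2

def load_graphic(emotion_label):
    # e.g. graphics/happy.png replaces every face classified as 'happy';
    # IMREAD_UNCHANGED keeps the alpha channel for transparent overlays
    return cv2.imread('graphics/%s.png' % emotion_label, cv2.IMREAD_UNCHANGED)
```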

screenshot

(screenshot: facemoji_screenshot.png)

credits

We got the idea of using the Cohn-Kanade dataset to classify emotions (and the file-harvesting approach) from Paul van Gent's blog post, thanks for that!