This repo contains all files related to the creation of this blog post.

This project is intended as a proof of concept for live capture of emotional analytics from your audience during a video meeting.
- The dataset file is too large to be uploaded to GitHub directly.
- Instead, it can be downloaded here:
- https://www.kaggle.com/ashishpatel26/facial-expression-recognitionferchallenge
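The dataset can also be fetched from the command line with the Kaggle CLI (this assumes you have installed the `kaggle` package and placed an API token at `~/.kaggle/kaggle.json`; the dataset slug is taken from the URL above):

```shell
# Install the Kaggle CLI if needed
pip install kaggle

# Download and unzip the FER dataset (requires a Kaggle API token)
kaggle datasets download -d ashishpatel26/facial-expression-recognitionferchallenge --unzip
```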
- This is a Colab notebook designed to be run on the free GPUs Colab provides.
- In this notebook, I build, train, and export the model.
- These are the saved model files for the version I implement.
- You will need this if you plan to run the .py locally.
- Standard haar cascade file, used in the .py file.
- This .py file should be run from the terminal of the machine whose screen you want to capture.
- It will create a window displaying live analytics of the seven emotions measured.
- It is currently set to a maximum of 4 faces; you can change this in the file.
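One way such a cap can be enforced (a hypothetical sketch, not the script's exact code; `MAX_FACES` and `limit_faces` are assumed names) is to keep only the largest detections by bounding-box area:

```python
# Hypothetical cap on the number of tracked faces; the script defaults to 4.
MAX_FACES = 4

def limit_faces(faces, max_faces=MAX_FACES):
    """Keep the max_faces largest (x, y, w, h) detections by box area."""
    return sorted(faces, key=lambda f: f[2] * f[3], reverse=True)[:max_faces]

detections = [(10, 10, 50, 50), (0, 0, 200, 200), (5, 5, 80, 80),
              (30, 30, 20, 20), (60, 60, 120, 120)]
print(limit_faces(detections))  # the 4 largest boxes, biggest first
```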
- I tuned the window locations for my machine with two monitors, so placement may differ on another setup.
- It will output a comma-separated text file of the captured results.
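The output format can be sketched as one timestamped row per face per frame, with a column for each emotion (an illustration only; the actual column names and order in the script may differ):

```python
import csv
import time

# The seven FER emotion classes, assumed here as the column set.
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

with open("results.txt", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "face_id"] + EMOTIONS)
    # Example row: face 0 scored mostly "happy" at this instant.
    writer.writerow([time.time(), 0, 0.01, 0.0, 0.02, 0.9, 0.03, 0.01, 0.03])
```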
- This is a sample report generated by running the .py file.
- This is a short notebook that generates some reporting from the results.
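Reporting on such a results file can be as simple as averaging each emotion column over the capture session (a stdlib-only sketch; the notebook itself may use other tools, and the column names here are assumed):

```python
from statistics import mean

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

# Toy rows standing in for a parsed results file.
rows = [
    {"happy": "0.9", "sad": "0.05", "angry": "0.01", "disgust": "0.0",
     "fear": "0.01", "surprise": "0.01", "neutral": "0.02"},
    {"happy": "0.7", "sad": "0.10", "angry": "0.05", "disgust": "0.0",
     "fear": "0.05", "surprise": "0.05", "neutral": "0.05"},
]

# Average score per emotion across all captured rows.
averages = {e: mean(float(r[e]) for r in rows) for e in EMOTIONS}
print(max(averages, key=averages.get))  # → happy
```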
- These images can be seen in the blog post mentioned above.