Engagement Detection from Video Capture

Homework submission: Whitepaper.pdf

Folder and File Structure

The most important files and directories are in bold. The data and models directories are not included due to their size. The data used for training can be requested from DAiSEE.

The GazeML code referenced in the appendix is available on GitHub.

This Directory

  • Whitepaper.pdf
    PDF describing the detailed methodology for the project.

  • aws_setup.md & jetson_setup.md
    Step-by-step instructions for setting up AWS and the Jetson for this project, including the Docker implementation, library installs, and OpenCV compilation (a build sanity check is sketched after this list).

  • Presentation.pdf
    In-class presentation.

  • EDA.xls
    Spreadsheet containing modeling results and some EDA

  • tree.txt
    Directory structure of the repository.
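
Since both setup guides culminate in an OpenCV compilation, a quick way to confirm the build is usable is a check along these lines. This is a minimal sketch; the CUDA check is only meaningful if OpenCV was compiled with CUDA support, as the Jetson instructions aim for.

```python
# Minimal sanity check for a finished OpenCV build (sketch only).
import cv2

print("OpenCV version:", cv2.__version__)

# Reports 0 on CPU-only builds; a positive count suggests the CUDA modules
# were compiled in (relevant for the Jetson build).
try:
    print("CUDA devices:", cv2.cuda.getCudaEnabledDeviceCount())
except AttributeError:
    print("This build has no cv2.cuda module (compiled without CUDA).")
```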

src

Contains all code for the project.

  • cnn
    Jupyter notebooks (numbered in order) for the CNN code used to fetch the data, organize it, and train the models.

  • rnn
    Jupyter notebooks (numbered in order) for the two LSTM models.

  • infer_class
    Scripts to run inference. infer_dnn.py has the most complete code (argparse options and MQTT messaging).

  • extract_frames.py
    Code to extract frames from videos; pass in the FPS as an integer argument (a rough sketch follows this list).
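
The sketch below illustrates the general approach for this step: sampling frames with OpenCV at a rate set by an argparse FPS argument. The argument names, output layout, and file naming here are assumptions for illustration, not necessarily what extract_frames.py does.

```python
# Sketch of frame extraction at a target FPS (hypothetical argument names
# and output layout; see extract_frames.py for the actual implementation).
import argparse
import os
import cv2

def extract_frames(video_path, out_dir, target_fps):
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    native_fps = cap.get(cv2.CAP_PROP_FPS) or target_fps
    step = max(1, round(native_fps / target_fps))  # keep every Nth frame

    index, saved = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:05d}.jpg"), frame)
            saved += 1
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Extract frames from a video")
    parser.add_argument("video", help="path to the input video")
    parser.add_argument("out_dir", help="directory to write frames to")
    parser.add_argument("fps", type=int, help="frames per second to keep")
    args = parser.parse_args()
    print(extract_frames(args.video, args.out_dir, args.fps), "frames written")
```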

report

Output files from running the demos and inference scripts.

  • infer_output
    Contains a subdirectory for each inference script, with videos recorded by the inference/demo programs and the report from the DNN (CNN) model.

  • resuts_images
    Contains images of the classification matrices for the experiments described in EDA.xlsx.

  • gazeML.png
    Image extracted from running the gaze_ml demo.

messaging

Scripts to set up MQTT messaging on AWS and the Jetson, using Docker.
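
For orientation, here is a minimal paho-mqtt sketch of the publish/subscribe flow these scripts enable; the broker host, topic name, and example payload are placeholders rather than the project's actual values.

```python
# Minimal paho-mqtt sketch of the Jetson -> cloud message flow.
# Broker host, topic, and payload are placeholders, not the project's values.
import json
import paho.mqtt.publish as publish
import paho.mqtt.subscribe as subscribe

BROKER_HOST = "broker.example.com"   # hypothetical broker (e.g. Mosquitto in Docker)
TOPIC = "engagement/predictions"     # hypothetical topic name

# Edge side: publish one inference result (e.g. what infer_dnn.py might send).
publish.single(TOPIC,
               payload=json.dumps({"label": "engaged", "confidence": 0.87}),
               hostname=BROKER_HOST)

# Cloud side: block until one message arrives, then print it.
msg = subscribe.simple(TOPIC, hostname=BROKER_HOST)
print(msg.topic, msg.payload.decode())
```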

About

Public Version of Berkeley MIDS W251 Project
