The most important files and directories are in bold. `data` and `models` are not included due to size; the data used for training can be requested from DAiSEE. The GazeML code referenced in the appendix is available from GitHub.
- **Whitepaper.pdf**: PDF describing the detailed methodology for the project
- **aws_setup.md & jetson_setup.md**: step-by-step instructions for setting up AWS and the Jetson for this project, including the Docker implementation, library installs, and OpenCV compilation
- **Presentation.pdf**: in-class presentation
- **EDA.xls**: spreadsheet containing modeling results and some EDA
- **tree.txt**: directory structure
Contains all code for the project:

- **cnn**: Jupyter notebooks (in order) for the CNN code used to get the data, organize it, and train models
- **rnn**: Jupyter notebooks (in order) for the two LSTM models
- **infer_class**: scripts to run inference; infer_dnn.py has the most complete code (argparse options and MQTT)
- **extract_frames.py**: code to extract frames from videos; pass in the FPS as an integer argument
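The argparse-plus-MQTT pattern mentioned for infer_dnn.py can be sketched as below. This is a hedged illustration only: the flag names (`--model`, `--broker`, `--topic`), their defaults, and the topic string are assumptions, not the script's actual interface.

```python
# Illustrative sketch of an argparse front end for an inference script
# that publishes results over MQTT. All option names are hypothetical.
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        description="Run DNN inference and publish results over MQTT"
    )
    parser.add_argument("--model", default="model.caffemodel",
                        help="path to the trained model file")
    parser.add_argument("--broker", default="localhost",
                        help="MQTT broker hostname")
    parser.add_argument("--topic", default="engagement/score",
                        help="MQTT topic to publish inference results to")
    return parser


if __name__ == "__main__":
    args = build_parser().parse_args()
    # A paho-mqtt client would typically connect to args.broker here
    # and publish each inference result to args.topic.
```

In this arrangement the MQTT endpoint can be changed per deployment (e.g. Jetson publishing to an AWS broker) without editing the script.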
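A minimal sketch of the frame-extraction approach, assuming the script samples frames from each video at a target FPS passed as an integer argument; the function names, output layout, and fallback source FPS here are illustrative, not the actual code.

```python
# Hypothetical frame extractor: saves every Nth frame of a video as a
# JPEG so the sampled rate approximates a target FPS.
import os
import sys


def sample_interval(video_fps: float, target_fps: int) -> int:
    """Number of source frames to advance between saved frames."""
    return max(1, round(video_fps / target_fps))


def extract_frames(video_path: str, out_dir: str, target_fps: int) -> int:
    """Write sampled frames to out_dir; returns the number saved."""
    import cv2  # deferred import so sample_interval works without OpenCV

    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    # Fall back to 30 FPS if the container does not report a frame rate.
    step = sample_interval(cap.get(cv2.CAP_PROP_FPS) or 30.0, target_fps)
    saved = idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:05d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved


if __name__ == "__main__" and len(sys.argv) > 2:
    # usage: python extract_frames.py <video> <fps>
    print(extract_frames(sys.argv[1], "frames", int(sys.argv[2])))
```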
Output files from running demos / inference:

- **infer_output**: subdirectories for each inference script, containing videos recorded by the inference/demo programs and the report from the DNN model (CNN)
- **resuts_images**: images of the classification matrices for the experiments described in EDA.xlsx
- **gazeML.png**: image extracted from running the gaze_ml demo

Scripts to set up MQTT messaging on AWS and the Jetson, using Docker.