Stress Prediction with Limited Features

For this project, we are interested in using an RNN to predict when an individual reaches a state of stress or increased stress. The goal of this project is to become familiar with Keras for deep learning and with the WESAD dataset.

Recently, we discovered the WESAD dataset (see /references), whose authors ran experiments on 12 subjects in which a stressful state was induced using the TSST (Trier Social Stress Test). They recorded multiple physiological signals, in addition to acceleration data, during the experiments. They attached two different devices to each individual, one wrist-worn and one chest-worn, and collected multiple modalities from each.

This project considers the chest-worn device and a subset of its modalities.

Directions

If you don't have the SD card loaded with all of the data on the NVIDIA Jetson TX2, make sure that DataManager.py and the notebooks point to your data source.

After ensuring that the data is downloaded and referenced correctly, you can run either the demo.sh script or the Demo.py script to test out the solution.

The demo prepares, creates, and evaluates an LSTM network trained for one epoch, and also loads and evaluates an LSTM network trained for 5 epochs. The demo prints the results from each for comparison.

Features

This project considers only 3 of the 6 sensor modalities from the chest device: accelerometer (ACC) data, skin temperature data, and electrodermal activity (EDA), also known as galvanic skin response (GSR).

The features of interest for the ACC data are as follows (a sketch of how all of these features might be computed follows the EDA list).

  1. Mean for each axis, summed over all axes
  2. STD for each axis, summed over all axes
  3. Peak frequency for X
  4. Peak frequency for Y
  5. Peak frequency for Z

The features of interest for the temperature sensor are:

  1. Min value
  2. Max value
  3. Dynamic Range
  4. Mean
  5. STD

The features of interest for the EDA data are:

  1. Min value
  2. Max value
  3. Dynamic Range
  4. Mean
  5. STD
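
As referenced above, the following is a minimal sketch of how these window-level features might be computed with numpy. The function names, the window layout, and the 700 Hz chest sampling rate are assumptions for illustration, not the exact code in DataManager.py.

```python
import numpy as np

def peak_frequency(signal, fs):
    """Frequency (Hz) of the largest FFT magnitude in a 1-D window."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

def acc_features(acc_window, fs=700):
    """ACC features for one window; acc_window is an (n_samples, 3) array of x/y/z."""
    return {
        "mean_sum": acc_window.mean(axis=0).sum(),  # mean per axis, summed over axes
        "std_sum": acc_window.std(axis=0).sum(),    # STD per axis, summed over axes
        "peak_freq_x": peak_frequency(acc_window[:, 0], fs),
        "peak_freq_y": peak_frequency(acc_window[:, 1], fs),
        "peak_freq_z": peak_frequency(acc_window[:, 2], fs),
    }

def scalar_features(window):
    """Min, max, dynamic range, mean, and STD for a 1-D Temp or EDA window."""
    return {
        "min": window.min(),
        "max": window.max(),
        "range": window.max() - window.min(),
        "mean": window.mean(),
        "std": window.std(),
    }
```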

Project Structure and Development Process

Environment

There are two assumed paths required to run the notebooks and the Python module:

  • a path to the git project
  • a path to the WESAD dataset

Be sure that these are assigned appropriately for your environment.
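
For example, the paths could be set near the top of DataManager.py or each notebook. The variable names and locations below are placeholders for your setup, not the identifiers actually used in the code:

```python
from pathlib import Path

# Placeholder paths; adjust for your environment
# (e.g. the SD card mounted on the NVIDIA Jetson TX2).
PROJECT_ROOT = Path("/home/user/wesad_experiments")  # path to the git project
WESAD_ROOT = Path("/media/sdcard/WESAD")             # path to the WESAD dataset

assert WESAD_ROOT.exists(), f"WESAD dataset not found at {WESAD_ROOT}"
```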

Dependencies

The project depends on multiple Python libraries and packages. All of the code is written for Python 3 and uses the packages listed below (a quick import check is sketched after the list):

  • pandas
  • matplotlib
  • numpy
  • keras
  • tensorflow
  • sklearn
  • ipython and jupyter for Jupyter notebooks
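
A quick, optional sanity check that the packages are importable in your Python 3 environment (this snippet is illustrative and not part of the repository):

```python
# Print the version of each required package, or fail with an ImportError if one is missing.
import pandas, matplotlib, numpy, keras, tensorflow, sklearn

for pkg in (pandas, matplotlib, numpy, keras, tensorflow, sklearn):
    print(pkg.__name__, pkg.__version__)
```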

File Structure

/

  • demo.sh - driver that sets up, builds, trains, and tests the model
  • readme.md
  • references/ - WESAD dataset and paper information
  • src/
    • src/main - Python3 modules
      • DataManager.py
      • Demo.py
    • src/notebooks - jupyter ipython notebooks
      • exploring-the-dataset.ipynb
      • feature-exploration.ipynb
      • feature-exploration-continued.ipynb
      • model-training.ipynb
      • demo
    • src/models - Directory for Keras model data files

Development Process

During the development process, trial-and-error experiments were performed inside the notebooks. Any time a new Python package/library was needed, it was installed and imported inside the notebook.

For each chunk of work that was completed, a function was added to the DataManager.py module. Along the way there were some memory issues that could have been addressed better; however, all notebooks run as is.

Results

The LSTM-based network architecture with one hidden layer achieves an accuracy of ~97.7% on the validation set after 5 epochs of training.

Windows: learning rate = 0.05, batch size = 2. With just one epoch, the model reaches between ~80% and ~92% accuracy on the validation data. Each epoch of training takes approximately 70 seconds without GPU acceleration. At 5 epochs, the model outperforms the results quoted in the WESAD paper for both accuracy and F1 score while using fewer modalities and fewer features.
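
For reference, a single-hidden-layer LSTM classifier along these lines can be sketched with tf.keras as below. The input shape, layer width, optimizer, and binary stress/no-stress labeling are assumptions for illustration and may differ from what Demo.py actually builds.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.optimizers import Adam

def build_lstm(timesteps=10, n_features=15, units=64, learning_rate=0.05):
    """One LSTM hidden layer followed by a sigmoid output for stress vs. no stress."""
    model = Sequential([
        LSTM(units, input_shape=(timesteps, n_features)),
        Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=Adam(learning_rate=learning_rate),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_lstm()
# model.fit(X_train, y_train, epochs=5, batch_size=2, validation_data=(X_val, y_val))
```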

NVIDIA Jetson TX2: batch sizes of 32, 4, and 2 were tried, and performance seems fairly slow even with NVIDIA GPU support enabled. The accuracy is also lower, and we are not sure why yet. With a batch size of 4 and one epoch, we get approximately 82% validation accuracy, still not as good as on Windows.

Future Work

In the future, we would like to refactor the DataManager to do incremental data loading and feature calculations to improve performance.
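
One possible shape for that refactor is a generator that loads and featurizes one subject at a time. The file layout below follows the standard WESAD pickle naming, but the function itself is only a sketch, not the planned DataManager API.

```python
import pickle
from pathlib import Path

def iter_subjects(wesad_root):
    """Yield (subject_id, raw data dict) one subject at a time so the full
    dataset never has to be held in memory at once."""
    for pkl in sorted(Path(wesad_root).glob("S*/S*.pkl")):
        with open(pkl, "rb") as f:
            # The WESAD pickles were written with Python 2, hence the latin1 encoding.
            data = pickle.load(f, encoding="latin1")
        yield pkl.stem, data
```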

About

Experimenting with WESAD using neural networks.
