HUMAN-ACTIVITY-RECOGNITION

This project builds a model that predicts human activities such as Walking, Walking_Upstairs, Walking_Downstairs, Sitting, Standing, or Laying. The dataset was collected from 30 persons (referred to as subjects in this dataset) performing these activities while wearing a smartphone on the waist. The data were recorded with the help of the sensors (accelerometer and gyroscope) in that smartphone, and the experiment was video-recorded so that the data could be labeled manually. The data come from the publicly available Human Activity Recognition Using Smartphones dataset.

How data was recorded:

Video Source: https://www.youtube.com/watch?v=XOEN9W05_4A&feature=youtu.be

By using the sensors (gyroscope and accelerometer) in a smartphone, the researchers captured 3-axial linear acceleration (tAcc-XYZ) from the accelerometer and 3-axial angular velocity (tGyro-XYZ) from the gyroscope. The sensor signals were pre-processed by applying noise filters and then sampled in fixed-width sliding windows of 2.56 sec with 50% overlap (128 readings per window). The sensor acceleration signal, which has gravitational and body-motion components, was separated into body acceleration and gravity using a Butterworth low-pass filter. The gravitational force is assumed to have only low-frequency components, so a filter with a 0.3 Hz cutoff frequency was used. From each window, a vector of features was obtained by calculating variables from the time and frequency domains.
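
A minimal sketch of this preprocessing in Python, using NumPy and SciPy (the filter order and the exact windowing implementation used by the dataset authors are assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 50.0      # sampling rate in Hz (128 readings per 2.56 s window)
CUTOFF = 0.3   # low-pass cutoff in Hz used to isolate gravity

def split_gravity_body(total_acc, order=3):
    """Separate an acceleration signal into gravity and body components
    with a Butterworth low-pass filter (the filter order is an assumption)."""
    b, a = butter(order, CUTOFF / (FS / 2), btype="low")
    gravity = filtfilt(b, a, total_acc)   # zero-phase low-pass filtering
    body = total_acc - gravity
    return gravity, body

def sliding_windows(signal, size=128, overlap=0.5):
    """Fixed-width sliding windows: 128 samples (2.56 s at 50 Hz), 50% overlap."""
    step = int(size * (1 - overlap))
    return np.stack([signal[i:i + size]
                     for i in range(0, len(signal) - size + 1, step)])
```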

Attribute Information:

For each record in the dataset, the following is provided (a data-loading sketch follows the list):

  • Triaxial acceleration from the accelerometer (total acceleration) and the estimated body acceleration.
  • Triaxial angular velocity from the gyroscope.
  • A 561-feature vector with time- and frequency-domain variables.
  • Its activity label (WALKING, WALKING_UPSTAIRS, WALKING_DOWNSTAIRS, SITTING, STANDING, LAYING).
  • An identifier of the subject who carried out the experiment.
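
As a sketch, the released dataset's text files can be loaded as below (file names follow the standard "UCI HAR Dataset" layout; the extraction path is an assumption):

```python
import numpy as np
import pandas as pd

DATA_DIR = "UCI HAR Dataset"   # assumed extraction path

# 561-feature vectors, activity labels, and subject identifiers per record
X_train = pd.read_csv(f"{DATA_DIR}/train/X_train.txt",
                      sep=r"\s+", header=None).values
y_train = np.loadtxt(f"{DATA_DIR}/train/y_train.txt", dtype=int)
subject = np.loadtxt(f"{DATA_DIR}/train/subject_train.txt", dtype=int)

print(X_train.shape, y_train.shape, subject.shape)
```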

Results:

With hand-coded 561 features and classical machine learning algorithms:

| Algorithm            | Test Accuracy (%) |
| -------------------- | ----------------- |
| Logistic Regression  | 96.3              |
| Linear SVC           | 96.5              |
| RBF-kernel SVM       | 96.27             |
| Decision Tree        | 86.39             |
| Random Forest        | 91.08             |
| Gradient Boosting DT | 92.63             |
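
A minimal scikit-learn sketch of this classical-ML baseline on the 561 hand-coded features (hyperparameters here are assumptions, not necessarily the tuned settings behind the table above):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

# X_train/X_test: (n_samples, 561) feature matrices; y_*: activity labels
logreg = LogisticRegression(max_iter=1000)   # max_iter is an assumption
logreg.fit(X_train, y_train)
print("Logistic Regression:", accuracy_score(y_test, logreg.predict(X_test)))

svc = LinearSVC(C=1.0)                       # C is an assumption
svc.fit(X_train, y_train)
print("Linear SVC:", accuracy_score(y_test, svc.predict(X_test)))
```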
With raw time-series data and a deep learning (LSTM) model:

[Results image from the original README; see HAR_LSTM_New.ipynb for the LSTM training results.]
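
A minimal Keras sketch of an LSTM over the raw inertial signals, where each window is 128 time steps of 9 signal channels (layer sizes and training settings are assumptions, not necessarily the exact architecture in HAR_LSTM_New.ipynb):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout

# Input: raw windows of shape (128 time steps, 9 inertial signals)
model = Sequential([
    LSTM(32, input_shape=(128, 9)),   # number of units is an assumption
    Dropout(0.5),
    Dense(6, activation="softmax"),   # 6 activity classes
])
model.compile(loss="categorical_crossentropy",
              optimizer="adam", metrics=["accuracy"])
# model.fit(X_train_raw, y_train_onehot, epochs=30, batch_size=64)
```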

References:

  1. "Deep Learning Models for Human Activity Recognition", machinelearningmastery.com
  2. Applied AI Course
  3. "Divide and Conquer-Based 1D CNN Human Activity Recognition Using Test Data Sharpening" (paper)