
Posture Detection from a Frontal Camera

Sitting for prolonged periods with bad posture can lead to serious spinal and other health complications. While certain chairs and cushions promote good posture, they are expensive and not easily transported. We present four machine learning models that detect good and bad posture from a user-facing webcam. We show that using existing pose estimation models to extract body keypoints from single frames and feeding these latent representations through a fully-connected classification head produces more accurate results than fine-tuning existing image recognition models. We also show that three-dimensional keypoint extraction is more accurate than two-dimensional extraction. Two of these models have been quantized, deployed on an NVIDIA Jetson Xavier NX, and paired with a user-facing chime that sounds after 60 seconds of sustained bad posture.
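
The keypoint-plus-classifier approach works as follows: a pretrained pose estimator turns each frame into a small set of body keypoints, and a lightweight fully-connected head classifies that keypoint vector as good or bad posture. The snippet below is a minimal sketch of this idea, not the project's code; it assumes MediaPipe Pose for the 33 three-dimensional landmarks, and the layer widths of the classification head are illustrative.

    # Minimal sketch: MediaPipe Pose keypoints -> fully-connected classifier.
    # Not the repo's code; layer sizes are illustrative assumptions.
    import cv2
    import numpy as np
    import mediapipe as mp
    import tensorflow as tf

    mp_pose = mp.solutions.pose

    def extract_keypoints(frame_bgr):
        """Return a flat (33 * 3,) array of x, y, z pose landmarks, or None."""
        with mp_pose.Pose(static_image_mode=True) as pose:
            results = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks is None:
            return None
        coords = [(lm.x, lm.y, lm.z) for lm in results.pose_landmarks.landmark]
        return np.asarray(coords, dtype=np.float32).flatten()

    # Fully-connected classification head on top of the frozen pose extractor.
    classifier = tf.keras.Sequential([
        tf.keras.Input(shape=(33 * 3,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(bad posture)
    ])
    classifier.compile(optimizer="adam", loss="binary_crossentropy",
                       metrics=["accuracy"])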

Demonstrations

MoveNet (with GPU)

movenet_recording.mov

MoveNet (Jetson Xavier)

W251_Posture_Correction.mp4

MediaPipe (with GPU)

mediapipe_rm.mov

Steps to Use MoveNet

  1. Clone this repository

    git clone https://github.com/lindseyBang/W251-Final-Project.git
    
  2. If you are deploying to a Jetson Xavier NX, build the Docker container for MoveNet. Otherwise, navigate to the scripts directory and run the corresponding Python file; step 3 is only needed on the Jetson. A sketch of what the local inference loop does is shown after this list.

    ## For deployment on Jetson
    
    cd movenet_model_file
    docker build -t movenet -f Dockerfile.nvidia .
    
    ## For local use
    
    cd scripts && python movenet_loop.py
    
  3. Run the run.sh script (Jetson only)

    sh run.sh
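
For reference, the local inference loop launched in step 2 conceptually looks like the sketch below. This is not the contents of movenet_loop.py; it is a hypothetical outline that assumes the MoveNet Lightning model from TensorFlow Hub and uses placeholder classify_posture and play_chime helpers to show how a chime could be triggered after 60 seconds of sustained bad posture.

    # Hypothetical outline of the webcam loop (not movenet_loop.py itself).
    import time
    import cv2
    import numpy as np
    import tensorflow as tf
    import tensorflow_hub as hub

    movenet = hub.load("https://tfhub.dev/google/movenet/singlepose/lightning/4")
    infer = movenet.signatures["serving_default"]

    def classify_posture(keypoints_17x3):
        # Placeholder for a trained classification head; always "good" here.
        return False

    def play_chime():
        # Placeholder alert; a real deployment would play a sound.
        print("\a Sit up straight!")

    BAD_POSTURE_LIMIT_S = 60  # chime after one minute of sustained bad posture
    bad_since = None

    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break

        # MoveNet Lightning expects a 192x192 int32 RGB image batch.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        img = tf.image.resize_with_pad(tf.expand_dims(rgb, axis=0), 192, 192)
        keypoints = infer(tf.cast(img, dtype=tf.int32))["output_0"]  # [1, 1, 17, 3]

        now = time.time()
        if classify_posture(np.squeeze(keypoints.numpy())):
            bad_since = bad_since or now
            if now - bad_since >= BAD_POSTURE_LIMIT_S:
                play_chime()
                bad_since = now  # reset so the chime can repeat
        else:
            bad_since = None

    cap.release()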
    
