
# Extracting gait metrics from videos using convolutional neural networks

Training and inference scripts for predicting gait parameters from video. We use OpenPose to extract joint trajectories from the videos.

*Post-operative gait of a patient with Cerebral Palsy (CP)*

Implementation of the algorithms from "Deep neural networks enable quantitative movement analysis using single-camera videos" by Łukasz Kidziński\*, Bryan Yang\*, Jennifer Hicks, Apoorva Rajagopal, Scott Delp, and Michael Schwartz.
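As a rough illustration of the kind of input the pipeline works with: OpenPose (with its BODY_25 model) writes one JSON file per frame, containing a flat `[x0, y0, c0, x1, y1, c1, ...]` keypoint array per detected person. The sketch below, which is our own minimal example and not code from this repository, pulls one joint's trajectory out of a sequence of such frames; the keypoint index and confidence threshold are assumptions.

```python
import json

# BODY_25 keypoint index for the right ankle (assumption based on the
# published OpenPose keypoint ordering; adjust for other models).
RIGHT_ANKLE = 11

def ankle_trajectory(frame_jsons, min_conf=0.1):
    """Collect (x, y) of the right ankle for the first detected person
    in each frame; None where no confident detection exists."""
    traj = []
    for raw in frame_jsons:
        frame = json.loads(raw)
        people = frame.get("people", [])
        if not people:
            traj.append(None)  # OpenPose found nobody in this frame
            continue
        kp = people[0]["pose_keypoints_2d"]
        x, y, conf = kp[3 * RIGHT_ANKLE : 3 * RIGHT_ANKLE + 3]
        traj.append((x, y) if conf > min_conf else None)
    return traj
```

In practice such per-joint time series, concatenated across joints, form the input to the networks described in the paper.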

## Online demo

Try the online demo at gaitlab.stanford.edu.

## Run the demo locally

To test our code, follow this notebook. To run the demo you will need a machine with an NVIDIA GPU, NVIDIA Docker, and Python 3.7.

## Training

To train the neural networks from scratch on our large dataset of preprocessed videos, use the training scripts from this directory. The training code requires a machine with a GPU and Python 3.7.
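The networks in the paper convolve over time series of joint coordinates. As a toy illustration of that core building block (not the authors' architecture), a 1-D convolutional layer followed by a ReLU and global average pooling can be sketched in plain Python; the kernel values here are arbitrary placeholders, where a trained network would learn them:

```python
def conv1d(signal, kernel, bias=0.0):
    """Valid-mode 1-D convolution (cross-correlation) of a trajectory
    with a learned kernel."""
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k)) + bias
        for i in range(len(signal) - k + 1)
    ]

def relu(xs):
    """Elementwise rectified linear activation."""
    return [max(0.0, x) for x in xs]

def global_average(xs):
    """Pool a variable-length feature map down to one scalar."""
    return sum(xs) / len(xs)

# Example: apply a (hypothetical) smoothing kernel to an ankle-x trajectory
# and pool it into a single feature.
traj = [0.0, 1.0, 2.0, 3.0, 4.0]
feat = global_average(relu(conv1d(traj, [0.5, 0.5])))  # → 2.0
```

Stacking many such filters, with learned kernels and a final regression head, yields a network that maps a trajectory to a scalar gait parameter.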

Download the dataset used in this project

## License

This source code is released under the Apache 2.0 License. Stanford University has a pending patent on this technology; if you are interested in commercial use, please contact the authors or Stanford's Office of Technology Licensing for details.

Our software relies on OpenPose, which is distributed under a custom non-commercial license. The other libraries are under permissive open-source licenses; for the specific terms, refer to the maintainers of the packages listed in our requirements file. If you intend to use our software, it is your obligation to ensure that you comply with the licensing terms of all linked software.

The processed video trajectories available here are released under the CC BY-NC 2.0 license.

The original video file used in the demo is provided courtesy of Gillette Children's Specialty Healthcare and must not be used for any purpose other than testing this repository without written permission from the hospital.