DeepPoseKit: an API for pose estimation

You have just found DeepPoseKit.

DeepPoseKit is a high-level API for 2D pose estimation with deep learning, written in Python and built using Keras and TensorFlow. Use DeepPoseKit if you need:

  • tools for annotating images or video frames with user-defined keypoints
  • a straightforward but flexible data augmentation pipeline using the imgaug package (see the sketch after this list)
  • a Keras-based interface for initializing, training, and evaluating pose estimation models
  • easy-to-use methods for saving and loading models and making predictions on new data
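
As a rough illustration of the augmentation step, a typical imgaug pipeline for keypoint data might look like the sketch below. This is illustrative only: the augmenters, keypoint coordinates, and image shape are placeholders, and how such a pipeline is attached to DeepPoseKit's generators is covered in the example notebooks.

import numpy as np
import imgaug.augmenters as iaa
from imgaug.augmentables.kps import Keypoint, KeypointsOnImage

image = np.zeros((192, 192, 3), dtype=np.uint8)  # stand-in for a video frame
keypoints = KeypointsOnImage([Keypoint(x=96, y=48),    # e.g. "head"
                              Keypoint(x=96, y=144)],  # e.g. "tail"
                             shape=image.shape)

augmenter = iaa.Sequential([
    iaa.Fliplr(0.5),                  # random horizontal flips
    iaa.Affine(rotate=(-15, 15)),     # small random rotations
    iaa.GaussianBlur(sigma=(0, 1.0))  # mild blur
])

image_aug, keypoints_aug = augmenter(image=image, keypoints=keypoints)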

DeepPoseKit is designed with a focus on usability and extensibility, as being able to go from idea to result with the least possible delay is key to doing good research.

DeepPoseKit is currently limited to individual pose estimation, but it can be extended to multiple individuals by first localizing and cropping each individual with additional tracking software such as idtracker.ai, pinpoint, or Tracktor (see the sketch below).
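
The sketch below illustrates that workflow in the simplest possible way: crop a fixed-size box around each tracker-supplied centroid before passing the crops to a single-individual model. The tracker output format, box size, and array shapes here are all hypothetical.

import numpy as np

def crop_individual(frame, centroid, box_size=128):
    # Return a crop of `frame` centered on `centroid` (x, y); crops near the
    # image border are simply truncated in this sketch.
    half = box_size // 2
    x, y = int(centroid[0]), int(centroid[1])
    y0, y1 = max(0, y - half), min(frame.shape[0], y + half)
    x0, x1 = max(0, x - half), min(frame.shape[1], x + half)
    return frame[y0:y1, x0:x1]

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a video frame
centroids = [(100.0, 200.0), (400.0, 300.0)]     # e.g. from idtracker.ai or Tracktor
crops = [crop_individual(frame, c) for c in centroids]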

Check out our preprint to find out more.

Note: This software is still in early-release development. Expect some adventures.

How to use DeepPoseKit

DeepPoseKit is designed for easy use. For example, training and saving a model requires only a few lines of code:

# imports assume the deepposekit package layout at the time of this release
from deepposekit.io import TrainingGenerator
from deepposekit.models import StackedDenseNet

train_generator = TrainingGenerator('/path/to/data.h5')  # annotated training data
model = StackedDenseNet(train_generator)  # build the network from the generator's metadata
model.compile('adam', 'mse')  # standard Keras optimizer and loss
model.fit(batch_size=16, n_workers=8)  # train with multiprocess data loading
model.save('/path/to/model.h5')  # write the trained model to disk
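
Because the models wrap Keras, standard Keras callbacks can typically be used during training. The sketch below assumes fit() accepts a callbacks argument like the underlying Keras method; check the example notebooks for the exact signature.

from keras.callbacks import ReduceLROnPlateau, EarlyStopping

callbacks = [
    ReduceLROnPlateau(monitor='loss', factor=0.2, patience=10),  # lower the learning rate when loss plateaus
    EarlyStopping(monitor='loss', patience=50)                   # stop training when loss stops improving
]

model.fit(batch_size=16, n_workers=8, callbacks=callbacks)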

Loading a trained model and running predictions on new data is also straightforward:

from deepposekit.models import load_model  # import path assumed from this release

model = load_model('/path/to/model.h5')
new_data = load_new_data('/path/to/new/data.h5')  # placeholder for your own data-loading code
predictions = model.predict(new_data)
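
To sanity-check the output, predicted keypoints can be overlaid on an input image. The sketch below assumes predictions is an array of shape (n_images, n_keypoints, 3) holding x, y, and a confidence score per keypoint; that layout is an assumption, so check the example notebooks for the actual output format.

import matplotlib.pyplot as plt

image = new_data[0]  # first image passed to predict
x, y = predictions[0, :, 0], predictions[0, :, 1]

plt.imshow(image.squeeze(), cmap='gray')
plt.scatter(x, y, c='red', s=10)  # overlay predicted keypoints
plt.show()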

See our example notebooks for more details on how to use DeepPoseKit.

Installation

DeepPoseKit requires TensorFlow and Keras for training and using pose estimation models. These should be installed manually, along with dependencies such as CUDA and cuDNN, before installing DeepPoseKit.
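
As a rough guide, this usually means something like the commands below; the exact TensorFlow build and version must match your CUDA and cuDNN installation, so treat these as placeholders rather than a tested recipe:

pip install tensorflow-gpu  # or `pip install tensorflow` for CPU-only use
pip install keras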

DeepPoseKit has only been tested on Ubuntu 18.04, which is the recommended system for using the toolkit.

Install the development version with pip:

pip install git+https://www.github.com/jgraving/deepposekit.git

The annotation toolkit is packaged separately and must be installed on its own (see DeepPoseKit Annotator for details):

pip install git+https://www.github.com/jgraving/deepposekit-annotator.git

You can download example datasets for DeepPoseKit from our DeepPoseKit Data repository:

git clone https://www.github.com/jgraving/deepposekit-data
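
Once cloned, the annotated datasets can be passed directly to the training code shown above. The path below is hypothetical; check the cloned repository for the actual dataset filenames.

from deepposekit.io import TrainingGenerator

train_generator = TrainingGenerator('deepposekit-data/datasets/fly/annotations.h5')  # hypothetical path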

Citation

If you use DeepPoseKit for your research please cite our preprint:

@article{graving2019fast,
         title={Fast and robust animal pose estimation},
         author={Graving, Jacob M and Chae, Daniel and Naik, Hemal and Li, Liang and Koger, Benjamin and Costelloe, Blair R and Couzin, Iain D},
         journal={bioRxiv},
         pages={620245},
         year={2019},
         publisher={Cold Spring Harbor Laboratory}
         }

License

Released under an Apache 2.0 license. See LICENSE for details.

Development

Please submit bug reports or feature requests to the GitHub issue tracker. Limit reported issues to the DeepPoseKit codebase itself, and provide as much detail as possible, ideally with a minimal working example.

If you experience problems with TensorFlow or Keras, such as installing CUDA or cuDNN dependencies, please direct those issues to the respective development teams.

Contributors

DeepPoseKit was developed by Jake Graving and Daniel Chae, and is still being actively developed. We welcome public contributions to the toolkit. If you wish to contribute, please fork the repository to make your modifications and submit a pull request.
