
Deep Reinforcement Learning for Active Human Pose Estimation

Authors: Aleksis Pirinen*, Erik Gärtner*, and Cristian Sminchisescu (* denotes shared first authorship).

[Figure: visualization]

Overview

Official implementation of the AAAI 2020 paper Deep Reinforcement Learning for Active Human Pose Estimation. This repository contains code for reproducing the Pose-DRL and baseline results reported in the paper, as well as for training Pose-DRL on Panoptic. The paper is also available on arXiv. A video overview of the paper is available here, with step-by-step visualizations here and here.

Pose-DRL is implemented in Caffe. The experiments are performed in the CMU Panoptic multi-camera framework. The Pose-DRL model in this repository uses MubyNet as the underlying 3D human pose estimator.

[Figure: overview]

Citation

If you find this implementation and/or our paper interesting or helpful, please consider citing:

@inproceedings{gartner2020activehpe,
  title={Deep Reinforcement Learning for Active Human Pose Estimation},
  author={G\"{a}rtner, Erik and Pirinen, Aleksis and Sminchisescu, Cristian},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2020}
}

Setup

  1. Clone the repository.
  2. Read the following documentation on how to set up our system, assuming you are using pre-computed feature maps and pose predictions from MubyNet (see the next step for how to pre-compute these). This covers the prerequisites and how to install our framework.
  3. See this dataset documentation for how to download and preprocess the Panoptic data, pre-compute MubyNet deep features and pose estimates, and train or download instance features for matching.

Pretrained models

Pretrained model weights for Pose-DRL can be downloaded here.

Using Pose-DRL

Training the model

To train the model, run the following command:

run_train_agent('train')

The results and weights will be stored in the location specified by CONFIG.output_dir.
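
For reference, here is a minimal sketch of a training launch. It assumes CONFIG is a global struct that can be edited before launching; in practice the field may instead be set through the repository's configuration files:

% Minimal training sketch (MATLAB). Assumes the repository and its
% dependencies are on the MATLAB path, and that CONFIG is a global
% configuration struct (an assumption; the field may instead be set
% in the repository's configuration files).
global CONFIG;
CONFIG.output_dir = '/path/to/output';  % results and weights are written here

run_train_agent('train');  % launches Pose-DRL training on Panoptic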

Evaluating the model

Given the model weights (either the provided weights or your own):

  1. Set the flag CONFIG.evaluation_mode = 'test';
  2. Set the flag CONFIG.agent_random_init = 0;
  3. Set the flag CONFIG.agent_weights = '<your-weights-path>';
  4. Set the flag CONFIG.training_agent_nbr_eps = 1; (Note: this does not update the weights, since they are only updated every 40 episodes.)
  5. Run run_train_agent('train'); the results will be stored in the location specified by CONFIG.output_dir (see the consolidated sketch after this list).
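
For convenience, the steps above can be collected into a single sketch. The flag names and values are taken directly from the list; treating CONFIG as a global struct edited before launch is an assumption:

% Evaluation sketch (MATLAB). The flag names and values come from the
% steps above; treating CONFIG as a global struct is an assumption.
global CONFIG;
CONFIG.evaluation_mode = 'test';
CONFIG.agent_random_init = 0;                  % load weights instead of random init
CONFIG.agent_weights = '<your-weights-path>';  % provided or self-trained weights
CONFIG.training_agent_nbr_eps = 1;             % no weight update (updates happen every 40 episodes)

run_train_agent('train');  % results are stored in CONFIG.output_dir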

Acknowledgements

This work was supported by the European Research Council Consolidator grant SEED, CNCS-UEFISCDI PN-III-P4-ID-PCE-2016-0535, the EU Horizon 2020 Grant DE-ENIGMA, SSF, as well as the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. Finally, we would like to thank Alin Popa, Andrei Zanfir, Mihai Zanfir and Elisabeta Oneata for helpful discussions and support.
