Domes to Drones: Self-Supervised Active Triangulation for 3D Human Pose Reconstruction

Authors: Erik Gärtner*, Aleksis Pirinen* and Cristian Sminchisescu (* denotes first authorship).

Overview

Official implementation of the NeurIPS 2019 paper Domes to Drones: Self-Supervised Active Triangulation for 3D Human Pose Reconstruction. This repo contains code for reproducing the results of our proposed ACTOR model and the baselines, as well as training ACTOR on Panoptic. A video overview of the paper is available here.

ACTOR is implemented in Caffe. The experiments are performed in the CMU Panoptic multi-camera framework. Our ACTOR implementation uses OpenPose as the underlying 2D pose estimator; we used a public TensorFlow implementation of OpenPose to pre-compute all pose and deep-feature predictions.

Citation

If you find this implementation and/or our paper interesting or helpful, please consider citing:

@inproceedings{pirinen2019domes,
    title={Domes to Drones: Self-Supervised Active Triangulation for 3D Human Pose Reconstruction},
    author={Pirinen, Aleksis and G{\"a}rtner, Erik and Sminchisescu, Cristian},
    booktitle={Advances in Neural Information Processing Systems},
    pages={3907--3917},
    year={2019}
}

Setup

  1. Clone the repository
  2. Read the documentation on how to set up our system; it covers the prerequisites and how to install our framework.
  3. See the dataset documentation for how to download and preprocess the Panoptic data, pre-compute OpenPose deep features and pose estimates, and train or download instance features for matching.

Pretrained models

Pretrained model weights for ACTOR can be downloaded here.

Using ACTOR

Demo

The MATLAB script demo.m reproduces the visualizations from the main paper. Running it creates an output folder containing a "recording" of the active view, showing errors, camera choices and reconstructions.

Training the model

To train the model, run:

run_train_agent('train')

The results and weights will be stored in the location given by CONFIG.output_dir.

Evaluating the model

Given the model weights (either the provided weights or your own):

  1. Set flag CONFIG.evaluation_mode = 'test';
  2. Set flag CONFIG.agent_random_init = 0;
  3. Set flag CONFIG.agent_weights = '<your-weights-path>';
  4. Set flag CONFIG.training_agent_nbr_eps = 1; (this will not update the weights, since they are only updated every 40 episodes).
  5. Run run_train_agent('train'); the results will be stored in the location given by CONFIG.output_dir.
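The steps above amount to setting a few CONFIG flags before launching the training entry point in evaluation mode. A minimal MATLAB sketch (flag names as listed above; the weights path is a placeholder you must fill in with your own):

```matlab
% Evaluate a trained ACTOR model -- flag names as in the steps above.
CONFIG.evaluation_mode = 'test';               % run in test mode
CONFIG.agent_random_init = 0;                  % load weights instead of random init
CONFIG.agent_weights = '<your-weights-path>';  % placeholder: path to your weights
CONFIG.training_agent_nbr_eps = 1;             % weights update every 40 eps, so none occur

run_train_agent('train');                      % results written to CONFIG.output_dir
```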

Acknowledgements

This work was supported by the European Research Council Consolidator grant SEED, CNCS-UEFISCDI PN-III-P4-ID-PCE-2016-0535 and PCCF-2016-0180, the EU Horizon 2020 Grant DE-ENIGMA, Swedish Foundation for Strategic Research (SSF) Smart Systems Program, as well as the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. We would also like to thank Patrik Persson for support with the drone experiments.
