# It's all Relative: Monocular 3D Human Pose Estimation from Weakly Supervised Data

This repository recreates the experiments and plots from our BMVC 2018 paper, which introduces a method for training a monocular 3D human pose estimation model from relative depth annotations, implemented in PyTorch. You can find the paper here, along with additional data on our project website.

## Overview

- `run_model.py`: Runs a pre-trained model on an input numpy array containing a 2D pose, then predicts and visualizes the corresponding 3D pose (see the sketch after this list).
- `main_human36.py`: Trains and tests a model with 17 2D input and 3D output keypoints on the Human3.6M dataset.
- `main_lsp.py`: Trains and tests a model with 14 2D input and 3D output keypoints on the LSP dataset.
- `paper_exps/`: Contains the files for replicating the experiments in the paper. NOTE: running the files in this directory will train the models from scratch. You can find the instructions for downloading pre-trained models here.
- `paper_plots/`: Contains the files for plotting the figures in the paper. NOTE: to plot a figure you must either run the corresponding file from `paper_exps/` or download the pre-trained models into `checkpoint/`.
- `opts/`: Contains the parameter settings for training and testing a new model on either Human3.6M or LSP, and for replicating the experiments in the paper.
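
As a rough illustration of the `run_model.py` workflow above, the sketch below prepares a 2D pose input array. The array shape, joint ordering, and file name here are assumptions for illustration only; check `run_model.py` and `opts/` for the actual interface.

```python
# Hypothetical input preparation for run_model.py (sketch only).
# Shape, joint order, and file name are assumptions, not the script's
# documented interface.
import numpy as np

# 17 (x, y) keypoints in image coordinates (for the Human3.6M-style model).
pose_2d = np.zeros((17, 2), dtype=np.float32)
pose_2d[0] = [512.0, 420.0]  # e.g. root/hip joint from a 2D detector
# ... fill in the remaining joints ...

np.save('demo_data/my_pose_2d.npy', pose_2d)  # then run: python run_model.py
```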

## Requirements

The code was developed using Python 2.7 and PyTorch 0.3.1.
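
Since these versions are now quite old, a quick sanity check like the following can save debugging time (a sketch, not part of the repository; newer Python/PyTorch versions will likely require code changes):

```python
# Minimal environment check for the versions this code targets.
from __future__ import print_function
import sys
import torch

assert sys.version_info[:2] == (2, 7), 'code was developed with Python 2.7'
print('PyTorch version:', torch.__version__)  # expected: 0.3.1
```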

## Results

Here we show some sample outputs from our model on the LSP dataset. For each set of results, we first show the input image, followed by the results of the fully supervised lifting approach of Martinez et al. (3D Supervised H36). Despite using significantly less information at training time, our model produces plausible output poses (Ours Relative H36). Fine-tuning on crowd annotations collected on LSP further improves the quality of the 3D poses (Ours Relative H36 + LSP FT).

*(Figure: sample LSP results.)*
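
The "significantly less information" above refers to relative depth annotations: for a pair of keypoints, an annotator only states which one is closer to the camera. A pairwise ranking loss on predicted depths of the kind described in the paper might look like the sketch below (our illustration with assumed names and sign conventions, not the repository's exact loss):

```python
import torch

def relative_depth_loss(pred_z, pairs, labels):
    """Ranking loss on predicted per-joint depths (illustrative sketch).

    pred_z : (num_joints,) predicted depth for each keypoint.
    pairs  : (num_pairs, 2) long tensor of annotated joint index pairs (i, j).
    labels : (num_pairs,) tensor with +1 if joint i is closer than joint j,
             -1 if it is farther, and 0 if annotated as equally deep.
    """
    diff = pred_z[pairs[:, 0]] - pred_z[pairs[:, 1]]
    ranked = labels != 0
    # Penalize depth orderings that contradict the annotation (soft ranking).
    rank_loss = torch.log1p(torch.exp(labels[ranked].float() * diff[ranked]))
    # Pull the two depths together when they were annotated as equal.
    tie_loss = diff[~ranked] ** 2
    return rank_loss.sum() + tie_loss.sum()

# Example: joint 0 annotated closer than joint 5; joints 2 and 3 equally deep.
z = torch.randn(17, requires_grad=True)
pairs = torch.tensor([[0, 5], [2, 3]])
labels = torch.tensor([1, 0])
relative_depth_loss(z, pairs, labels).backward()
```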

## Video

Watch here

## Reference

If you find our work useful in your research, please cite our paper:

```
@inproceedings{relativeposeBMVC18,
  title     = {It's all Relative: Monocular 3D Human Pose Estimation from Weakly Supervised Data},
  author    = {Ronchi, Matteo Ruggero and Mac Aodha, Oisin and Eng, Robert and Perona, Pietro},
  booktitle = {BMVC},
  year      = {2018}
}
```

## Original Code

Uses some code from here, which is a PyTorch implementation of Martinez et al.'s TensorFlow code.
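
For context, the Martinez et al. baseline lifts 2D keypoints to 3D with a simple residual MLP. A minimal sketch of that style of network is below; the layer sizes and block count are assumptions based on their paper, not the exact architecture in `src/`:

```python
# Illustrative Martinez-et-al.-style 2D->3D lifting network (residual MLP).
# Sizes and structure are assumptions, not the repository's exact model.
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, size=1024, p_dropout=0.5):
        super(ResidualBlock, self).__init__()
        self.layers = nn.Sequential(
            nn.Linear(size, size), nn.BatchNorm1d(size), nn.ReLU(), nn.Dropout(p_dropout),
            nn.Linear(size, size), nn.BatchNorm1d(size), nn.ReLU(), nn.Dropout(p_dropout),
        )

    def forward(self, x):
        return x + self.layers(x)  # residual connection

class Lifter(nn.Module):
    """Maps flattened 2D keypoints (num_joints * 2) to 3D (num_joints * 3)."""
    def __init__(self, num_joints=17, size=1024):
        super(Lifter, self).__init__()
        self.inp = nn.Linear(num_joints * 2, size)
        self.blocks = nn.Sequential(ResidualBlock(size), ResidualBlock(size))
        self.out = nn.Linear(size, num_joints * 3)

    def forward(self, x):
        return self.out(self.blocks(self.inp(x)))
```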
