
PoseNet of "Camera Distance-aware Top-down Approach for 3D Multi-person Pose Estimation from a Single RGB Image"

Introduction

This repo is the official PyTorch implementation of Camera Distance-aware Top-down Approach for 3D Multi-person Pose Estimation from a Single RGB Image (ICCV 2019). It contains the PoseNet part.

Dependencies

This code is tested on Ubuntu 16.04 with CUDA 9.0 and cuDNN 7.1, using two NVIDIA 1080 Ti GPUs.

Python 3.6.5 with Anaconda 3 is used for development.
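
As a quick sanity check of a matching environment, a snippet like the following can be run from Python (a minimal sketch; it only assumes PyTorch is installed):

import torch

# Report the installed PyTorch build and the CUDA toolkit it was compiled with.
print("PyTorch:", torch.__version__)
print("CUDA build:", torch.version.cuda)

# The commands below use two GPUs (e.g. --gpu 0-1), so at least two should be visible.
print("Visible GPUs:", torch.cuda.device_count())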

Directory

Root

The ${POSE_ROOT} directory is organized as below.

${POSE_ROOT}
|-- data
|-- common
|-- main
|-- vis
`-- output
  • data contains data loading code and soft links to the images and annotations directories.
  • common contains core code for the 3D multi-person pose estimation system.
  • main contains high-level code for training and testing the network.
  • vis contains scripts for 3D visualization.
  • output contains logs, trained models, visualized outputs, and test results.

Data

You need to follow the directory structure of the data as below.

${POSE_ROOT}
|-- data
|-- |-- Human36M
|   `-- |-- bbox_root
|       |   |-- bbox_root_human36m_output.json
|       |-- images
|       `-- annotations
|-- |-- MPII
|   `-- |-- images
|       `-- annotations
|-- |-- MSCOCO
|   `-- |-- bbox_root
|       |   |-- bbox_root_coco_output.json
|       |-- images
|       |   |-- train/
|       |   |-- val/
|       `-- annotations
|-- |-- MuCo
|   `-- |-- data
|       |   |-- augmented_set
|       |   |-- unaugmented_set
|       |   `-- MuCo-3DHP.json
`-- |-- MuPoTS
|   `-- |-- bbox_root
|       |   |-- bbox_mupots_output.json
|       |-- data
|       |   |-- MultiPersonTestSet
|       |   `-- MuPoTS-3D.json
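
Before training, a small check like the following can confirm the layout (a minimal sketch; the path list mirrors the tree above, and only the datasets you actually use need to exist):

import os

POSE_ROOT = "."  # set this to your ${POSE_ROOT}

# Directories taken from the tree above.
required = [
    "data/Human36M/images",
    "data/Human36M/annotations",
    "data/MPII/images",
    "data/MPII/annotations",
    "data/MSCOCO/images/train",
    "data/MSCOCO/images/val",
    "data/MSCOCO/annotations",
    "data/MuCo/data/augmented_set",
    "data/MuCo/data/unaugmented_set",
    "data/MuPoTS/data/MultiPersonTestSet",
]

for rel in required:
    status = "ok     " if os.path.isdir(os.path.join(POSE_ROOT, rel)) else "MISSING"
    print(status, rel)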

Output

You need to follow the directory structure of the output folder as below.

${POSE_ROOT}
|-- output
|-- |-- log
|-- |-- model_dump
|-- |-- result
`-- |-- vis
  • Creating the output folder as a soft link instead of a regular folder is recommended, because it can take up a large amount of storage (see the sketch after this list).
  • The log folder contains training log files.
  • The model_dump folder contains saved checkpoints for each epoch.
  • The result folder contains the final estimation files generated in the testing stage.
  • The vis folder contains visualized results.
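
One way to create the output folder as a soft link, as recommended above, is (a minimal sketch; the storage path is an example):

import os

# Example storage location; replace with a disk that has enough free space.
storage = "/path/to/large/disk/output"

os.makedirs(storage, exist_ok=True)
# Create ${POSE_ROOT}/output as a soft link pointing at the storage location.
os.symlink(storage, "output")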

3D visualization

  • Run $DB_NAME_img_name.py to get image file names in .txt format.
  • Place your test result files (preds_2d_kpt_$DB_NAME.mat and preds_3d_kpt_$DB_NAME.mat) in the single or multi folder; these files can also be inspected as shown after this list.
  • Run draw_3Dpose_$DB_NAME.m.
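
The test result files can be inspected from Python before the MATLAB step, for example (a minimal sketch, assuming scipy is installed; the variable names inside the files are printed rather than assumed):

from scipy.io import loadmat

# Load one of the test result files named in the steps above.
mat = loadmat("preds_3d_kpt_mupots.mat")

# Print each stored variable and its array shape, skipping MATLAB metadata keys.
for key, value in mat.items():
    if not key.startswith("__"):
        print(key, getattr(value, "shape", type(value)))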

Running 3DMPPE_POSENET

Start

  • In main/config.py, you can change settings of the model, including the dataset to use, the network backbone, and the input size; an illustrative sketch follows.
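
For illustration, the kinds of settings meant here look roughly like the sketch below; the attribute names are assumptions for the example, not necessarily the exact names used in main/config.py:

# Illustrative sketch only; attribute names are assumed, not copied from the repo.
class Config:
    trainset = ["Human36M", "MPII"]  # datasets to train on
    testset = "Human36M"             # dataset to evaluate on
    backbone = "resnet50"            # network backbone
    input_shape = (256, 256)         # network input size (height, width)

cfg = Config()
print(cfg.trainset, cfg.testset, cfg.backbone, cfg.input_shape)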

Train

In the main folder, run

python train.py --gpu 0-1

to train the network on GPUs 0 and 1.

If you want to continue a stopped experiment, run

python train.py --gpu 0-1 --continue

--gpu 0,1 can be used instead of --gpu 0-1.
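
Both forms work because the GPU argument expands to the same device list; a minimal sketch of such parsing (not the repo's exact code) is:

def parse_gpu_ids(spec):
    # "0-1" is an inclusive range; "0,1" is an explicit list.
    if "-" in spec:
        start, end = spec.split("-")
        return list(range(int(start), int(end) + 1))
    return [int(g) for g in spec.split(",")]

assert parse_gpu_ids("0-1") == parse_gpu_ids("0,1") == [0, 1]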

Test

Place the trained model at output/model_dump/.

In the main folder, run

python test.py --gpu 0-1 --test_epoch 20

to test the network on GPUs 0 and 1 using the 20th-epoch checkpoint. --gpu 0,1 can be used instead of --gpu 0-1.

Results

Here I report the performance of PoseNet and provide pre-trained PoseNet models. Bounding boxes and root locations are obtained from DetectNet and RootNet.

Human3.6M dataset using protocol 1

For evaluation, you can run test.py or use the evaluation code in Human36M.

  • Bounding box [H36M_protocol1]
  • Bounding box + 3D human root coordinates in camera space [H36M_protocol1]
  • PoseNet model trained on Human3.6M protocol 1 + MPII [model]

Human3.6M dataset using protocol 2

For evaluation, you can run test.py or use the evaluation code in Human36M.

  • Bounding box [H36M_protocol2]
  • Bounding box + 3D human root coordinates in camera space [H36M_protocol2]
  • PoseNet model trained on Human3.6M protocol 2 + MPII [model]

MuPoTS-3D dataset

For evaluation, run test.py. After that, move data/MuPoTS/mpii_mupots_multiperson_eval.m to data/MuPoTS/data. Also, move the test result files (preds_2d_kpt_mupots.mat and preds_3d_kpt_mupots.mat) to data/MuPoTS/data. Then run mpii_mupots_multiperson_eval.m with your evaluation mode arguments.

  • Bounding box [MuPoTS-3D]
  • Bounding box + 3D human root coordinates in camera space [MuPoTS-3D]
  • PoseNet model trained on MuCo-3DHP + MSCOCO [model]

MSCOCO dataset

We additionally provide estimated 3D human root coordinates on the MSCOCO dataset. The coordinates are in the 3D camera coordinate system, and the focal lengths are set to 1500 mm for both the x and y axes. You can change the focal length and the corresponding distance using Equation 2 of the paper or the equation in its supplementary material; see the sketch after the link below.

  • Bounding box + 3D human root coordinates in camera space [MSCOCO]
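
As a concrete example of that conversion: under Equation 2 of the paper, the estimated distance scales with the square root of the focal-length product, so a provided root depth can be re-expressed for a camera with known focal lengths roughly as follows (a sketch under that reading of the equation):

import math

def rescale_root_depth(z_mm, fx_new, fy_new, f_assumed=1500.0):
    # Equation 2 gives the distance proportional to sqrt(fx * fy); the provided
    # coordinates assume fx = fy = 1500 mm, so rescale by sqrt(fx'*fy') / 1500.
    return z_mm * math.sqrt(fx_new * fy_new) / f_assumed

# Example: a root depth of 3000 mm re-expressed for a camera with fx = fy = 1200.
print(rescale_root_depth(3000.0, 1200.0, 1200.0))  # 2400.0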

Reference

@InProceedings{Moon_2019_ICCV_3DMPPE,
  author = {Moon, Gyeongsik and Chang, Juyong and Lee, Kyoung Mu},
  title = {Camera Distance-aware Top-down Approach for 3D Multi-person Pose Estimation from a Single RGB Image},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  year = {2019}
}
