Triangulation Residual Loss

Introduction

Code for the paper Triangulation Residual Loss for Data-efficient 3D Pose Estimation. The TR loss provides self-supervision with global 3D geometric consistency by minimizing the smallest singular value of the triangulation matrix. Specifically, the TR loss minimizes the weighted sum of distances from the current 3D estimate to all view rays, so that the view rays converge to a stable 3D point.
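For intuition, the triangulation matrix can be assembled with a standard confidence-weighted DLT construction, as sketched below. This is an illustrative sketch only: the function name, tensor shapes, and weighting are assumptions, not the exact code of this repository (see triangulate_head.py for the actual implementation).

import torch

def build_triangulation_matrix(proj_mats, kpts_2d, confidences):
    # proj_mats:   (N, 3, 4) camera projection matrices
    # kpts_2d:     (N, 2)    detected 2D locations of one joint in N views
    # confidences: (N,)      per-view weights
    rows = []
    for P, (u, v), w in zip(proj_mats, kpts_2d, confidences):
        rows.append(w * (u * P[2] - P[0]))  # x-constraint from this view
        rows.append(w * (v * P[2] - P[1]))  # y-constraint from this view
    return torch.stack(rows)                # A with shape (2N, 4)

A = build_triangulation_matrix(torch.rand(4, 3, 4), torch.rand(4, 2), torch.ones(4))
tr_loss = torch.linalg.svdvals(A)[-1]       # smallest singular value of A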

An example video is available here.

Usage

  • Install mmcv==1.7.0 and mmpose==0.29.0 following the official installation guidelines.
  • Clone this repository and install its requirements.

Datasets

  • Calm21 (MARS) dataset: the images can be downloaded from here, and the annotations can be downloaded from here.

  • Dannce and THM datasets: the images and annotations used can be downloaded from here.

  • The Human3.6M dataset is provided in COCO annotation format.

  • Download the datasets to your local machine, then set 'data_root' in the config file to the download path (see the illustrative snippet after this list).
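For example, setting data_root in an mmpose-style Python config might look like the sketch below; the paths and dictionary keys are illustrative assumptions, not the repository's exact config.

# illustrative snippet of an mmpose-style config; adjust to the actual config file you use
data_root = '/path/to/downloaded/dataset'  # point this at your local download path

data = dict(
    train=dict(
        ann_file=f'{data_root}/annotations/train.json',
        img_prefix=f'{data_root}/images/'),
    val=dict(
        ann_file=f'{data_root}/annotations/val.json',
        img_prefix=f'{data_root}/images/'))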

Train and evaluate

The pretrained backbone and models can be downloaded from here.

How to apply TR loss to your model?

The core of the TR loss is implemented in TRL/models/heads/triangulate_head.py as follows:

u, s, vh = torch.svd(A.view(-1, 4))  # A is the triangulation matrix defined in Eq. (13) of the paper
res_triang = s[-1]                   # the smallest singular value of A is the TR loss

Then add the TR loss to your total training loss and backpropagate as usual.
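As a minimal sketch (loss_2d and lambda_tr below are illustrative placeholders for your model's existing loss and a weighting factor, not names from this repository):

lambda_tr = 1.0                                # illustrative weight for the TR term
total_loss = loss_2d + lambda_tr * res_triang  # loss_2d: your existing supervised loss
total_loss.backward()                          # gradients flow through the SVD back to the 2D predictions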

Acknowledgements

[1] https://github.com/zhezh/adafuse-3d-human-pose

[2] https://github.com/karfly/learnable-triangulation-pytorch

[3] https://github.com/luminxu/Pose-for-Everything
