ygx7/LG3DPose
Local-Global Feature Fusion for Enhancing 3D Human Pose Estimation

This work builds on MotionAGFormer and P-STMO; please refer to those repositories for further details.

Environment

The code was developed and tested under the following environment:

  • Ubuntu 20.04
  • Python 3.9.16
  • PyTorch 1.13.1
  • CUDA 11.7
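
A minimal setup sketch for the environment above (assuming conda is available; the environment name is arbitrary, and the install command follows PyTorch's standard CUDA 11.7 wheel index — neither is taken from this repository):

```shell
conda create -n lg3dpose python=3.9.16 -y
conda activate lg3dpose
pip install torch==1.13.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117
```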

Dataset

The dataset setup follows MotionAGFormer; please refer to it to prepare the datasets (under the ./data directory).

Evaluation

Run the commands below to evaluate with 243-frame input:

  • Human3.6M
python train.py --eval-only --checkpoint checkpoint --checkpoint-file 37.7h36m.pth.tr --config configs/h36m/LG3DPose.yaml
  • MPI-INF-3DHP
python train.py --eval-only --checkpoint checkpoint --checkpoint-file 16.4mpi.pth.tr --config configs/mpi/LG3DPose.yaml
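
For reference, Human3.6M results are conventionally reported as MPJPE (mean per-joint position error, in millimetres), and the `37.7` in the checkpoint name likely follows that convention. A minimal MPJPE sketch in plain Python (an illustration of the metric, not this repository's implementation):

```python
import math

def mpjpe(pred, gt):
    """Mean per-joint position error: the Euclidean distance between
    predicted and ground-truth 3D joint positions, averaged over joints."""
    assert len(pred) == len(gt)
    total = 0.0
    for (px, py, pz), (gx, gy, gz) in zip(pred, gt):
        total += math.sqrt((px - gx) ** 2 + (py - gy) ** 2 + (pz - gz) ** 2)
    return total / len(pred)

# Toy example: two joints, off by 3 mm and 4 mm along one axis each.
pred = [(3.0, 0.0, 0.0), (0.0, 4.0, 0.0)]
gt = [(0.0, 0.0, 0.0), (0.0, 0.0, 0.0)]
print(mpjpe(pred, gt))  # → 3.5
```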

Training from scratch

Training our model with GPU:

  • Human3.6M
python train.py --config configs/h36m/LG3DPose.yaml --use-wandb --wandb-name LG3DPose
  • MPI-INF-3DHP
python train.py --config configs/mpi/LG3DPose.yaml --use-wandb --wandb-name LG3DPose-MPI
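
The flags above follow a common `argparse` pattern, where `--use-wandb` gates experiment logging and `--wandb-name` labels the run. A generic sketch of how such flags are typically parsed (an assumption about the pattern, not this repository's actual `train.py`):

```python
import argparse

# Mirror the flags used in the commands above.
parser = argparse.ArgumentParser()
parser.add_argument("--config", required=True)
parser.add_argument("--use-wandb", action="store_true")  # off unless passed
parser.add_argument("--wandb-name", default=None)
parser.add_argument("--eval-only", action="store_true")
parser.add_argument("--checkpoint", default="checkpoint")
parser.add_argument("--checkpoint-file", default=None)

args = parser.parse_args(["--config", "configs/h36m/LG3DPose.yaml",
                          "--use-wandb", "--wandb-name", "LG3DPose"])
print(args.use_wandb, args.wandb_name)  # → True LG3DPose
```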

Acknowledgement

Our code is built upon the following baselines; we thank their authors:

  • MotionAGFormer
  • P-STMO
