This work builds on MotionAGFormer and P-STMO; please refer to those repositories for further details.
The code was developed and tested under the following environment:
- Ubuntu 20.04
- Python 3.9.16
- PyTorch 1.13.1
- CUDA 11.7
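A quick way to confirm the environment matches the versions above is a small sanity-check script. This is a sketch for convenience, not part of the repository; the version pins come from the README and are only printed, not enforced.

```python
# Environment sanity check (sketch; version pins from the README are
# printed for comparison, not enforced).
import platform

py_version = platform.python_version()
print("Python:", py_version)  # README was tested with 3.9.16

try:
    import torch  # README was tested with PyTorch 1.13.1 / CUDA 11.7
    print("PyTorch:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())
except ImportError:
    print("PyTorch is not installed in this environment")
```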
Dataset preparation follows MotionAGFormer. Please refer to it to set up the datasets (under the ./data directory).
- Download our pretrained models from Google Drive.
Then run the commands below (evaluation with 243-frame input):
- Human3.6M
python train.py --eval-only --checkpoint checkpoint --checkpoint-file 37.7h36m.pth.tr --config configs/h36m/LG3DPose.yaml
- MPI-INF-3DHP
python train.py --eval-only --checkpoint checkpoint --checkpoint-file 16.4mpi.pth.tr --config configs/mpi/LG3DPose.yaml

To train our model on a GPU:
- Human3.6M
python train.py --config configs/h36m/LG3DPose.yaml --use-wandb --wandb-name LG3DPose
- MPI-INF-3DHP
python train.py --config configs/mpi/LG3DPose.yaml --use-wandb --wandb-name LG3DPose-MPI

Our code is built on the following baselines; many thanks to their authors:
- MotionAGFormer
- P-STMO

