
History Repeats Itself: Human Motion Prediction via Motion Attention

This is the code for the paper

Wei Mao, Miaomiao Liu, Mathieu Salzmann. History Repeats Itself: Human Motion Prediction via Motion Attention. In ECCV 2020.

Wei Mao, Miaomiao Liu, Mathieu Salzmann, Hongdong Li. Multi-level Motion Attention for Human Motion Prediction. In IJCV 2021.

Dependencies

  • CUDA 10.0
  • Python 3.6
  • PyTorch > 1.0.0 (tested on 1.1.0 and 1.3.0)
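
A quick sanity check of the environment (a minimal sketch; the version numbers mirror the list above):

import torch

# Print the installed PyTorch version (the code was tested on 1.1.0 and 1.3.0).
print("PyTorch:", torch.__version__)

# Confirm that a CUDA-capable GPU is visible (the code was developed against CUDA 10.0).
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA version:", torch.version.cuda)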

Get the data

Human3.6M in exponential map format can be downloaded from here.

  • UPDATE 2024-02: The above link no longer appears to work. Please try downloading the dataset from here, and follow the dataset's license.

Directory structure:

H3.6m
|-- S1
|-- S5
|-- S6
|-- ...
`-- S11

AMASS can be downloaded from the official website.

Directory structure:

amass
|-- ACCAD
|-- BioMotionLab_NTroje
|-- CMU
|-- ...
`-- Transitions_mocap

3DPW can be downloaded from the official website.

Directory structure:

3dpw
|-- imageFiles
|   |-- courtyard_arguing_00
|   |-- courtyard_backpack_00
|   |-- ...
`-- sequenceFiles
    |-- test
    |-- train
    `-- validation

Put all the downloaded datasets in the ./datasets directory. A minimal sketch for verifying the expected layout follows.
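
The sketch below spot-checks a few of the subdirectories shown in the trees above; using ./datasets as the root follows the instruction above, and the directory casing should be adjusted to match your download:

import os

# Spot-check a sample of the expected subdirectories under ./datasets, per the trees above.
expected = {
    "H3.6m": ["S1", "S5", "S6", "S11"],
    "amass": ["ACCAD", "BioMotionLab_NTroje", "CMU", "Transitions_mocap"],
    "3dpw": ["imageFiles", "sequenceFiles"],
}

for dataset, subdirs in expected.items():
    for sub in subdirs:
        path = os.path.join("./datasets", dataset, sub)
        print(path, "OK" if os.path.isdir(path) else "MISSING")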

Training

All run-time arguments are defined in opt.py. We use the following commands to train on the different datasets and representations (a sketch of the DCT encoding controlled by --dct_n follows the commands). To train:

python main_h36m_3d.py --kernel_size 10 --dct_n 20 --input_n 50 --output_n 10 --skip_rate 1 --batch_size 32 --test_batch_size 32 --in_features 66
python main_h36m_ang.py --kernel_size 10 --dct_n 20 --input_n 50 --output_n 10 --skip_rate 1 --batch_size 32 --test_batch_size 32 --in_features 48
python main_amass_3d.py --kernel_size 10 --dct_n 35 --input_n 50 --output_n 25 --skip_rate 5 --batch_size 128 --test_batch_size 128 --in_features 54 
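
Per the papers (following LTD), each joint trajectory is encoded with its first dct_n DCT coefficients before prediction. A minimal, self-contained sketch of that encoding/decoding step, assuming an orthonormal type-II DCT over the time axis (an illustration, not the repo's exact code):

import numpy as np
from scipy.fft import dct, idct

def dct_encode(seq, dct_n):
    # seq: (T, F) motion sequence; keep only the first dct_n temporal coefficients.
    coeffs = dct(seq, type=2, norm="ortho", axis=0)
    return coeffs[:dct_n]

def dct_decode(coeffs, T):
    # Zero-pad the truncated coefficients back to length T and invert the transform.
    full = np.zeros((T, coeffs.shape[1]))
    full[:coeffs.shape[0]] = coeffs
    return idct(full, type=2, norm="ortho", axis=0)

seq = np.random.randn(20, 66)     # e.g. kernel_size + output_n frames, 66 features
recon = dct_decode(dct_encode(seq, dct_n=20), T=20)
print(np.abs(recon - seq).max())  # near zero: dct_n == T keeps the encoding lossless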

Training of multi-level attention

To train the joint-level attention model:

python main_h36m_3d_joints.py --in_features 66 --kernel_size 10 --dct_n 20 --input_n 50 --output_n 10 --skip_rate 1 --batch_size 32 --test_batch_size 32

To train the part-level attention model:

python main_h36m_3d_parts.py --in_features 66 --kernel_size 10 --dct_n 20 --input_n 50 --output_n 10 --skip_rate 1 --batch_size 32 --test_batch_size 32

To train the post-fusion model (since the pretrained joint- and part-level attention models exceed GitHub's file size limit, the checkpoints are shipped compressed; a sketch for extracting them follows the command):

python main_h36m_3d_post_fusion.py --in_features 66 --kernel_size 10 --dct_n 20 --input_n 50 --output_n 10 --skip_rate 1 --batch_size 32 --test_batch_size 32 --epoch 20
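
A minimal extraction sketch, assuming the compressed checkpoints are zip archives under ./checkpoint/ (the archive format and location are assumptions; adjust to what the repo actually ships):

import glob
import os
import zipfile

# Extract every zip archive found under the checkpoint directory, in place.
for archive in glob.glob("./checkpoint/**/*.zip", recursive=True):
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(os.path.dirname(archive))
    print("extracted", archive)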

Evaluation

To evaluate the pretrained models (a sketch of the reported 3D metric follows the commands):

python main_h36m_3d_eval.py --is_eval --kernel_size 10 --dct_n 20 --input_n 50 --output_n 25 --skip_rate 1 --batch_size 32 --test_batch_size 32 --in_features 66 --ckpt ./checkpoint/pretrained/h36m_3d_in50_out10_dctn20/
python main_h36m_ang_eval.py --is_eval --kernel_size 10 --dct_n 20 --input_n 50 --output_n 25 --skip_rate 1 --batch_size 32 --test_batch_size 32 --in_features 48 --ckpt ./checkpoint/pretrained/h36m_ang_in50_out10_dctn20/
python main_amass_3d_eval.py --is_eval --kernel_size 10 --dct_n 35 --input_n 50 --output_n 25 --skip_rate 5 --batch_size 128 --test_batch_size 128 --in_features 54 --ckpt ./checkpoint/pretrained/amass_3d_in50_out25_dctn30/
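
The 3D experiments report the mean per-joint position error (MPJPE, in millimetres) at each future frame. A minimal reference implementation of the metric (an illustration, not the repo's evaluation code):

import numpy as np

def mpjpe(pred, gt):
    # pred, gt: (N, T, J, 3) predicted / ground-truth 3D joint positions.
    # Returns the mean Euclidean joint error at each of the T future frames.
    return np.linalg.norm(pred - gt, axis=-1).mean(axis=(0, 2))

pred = np.random.randn(8, 25, 22, 3)  # batch of 8, 25 future frames, 22 joints
gt = np.random.randn(8, 25, 22, 3)
print(mpjpe(pred, gt))                # per-frame error in the input units (e.g. mm)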

Citing

If you use our code, please cite our work:

@inproceedings{wei2020his,
  title={History Repeats Itself: Human Motion Prediction via Motion Attention},
  author={Mao, Wei and Liu, Miaomiao and Salzmann, Mathieu},
  booktitle={ECCV},
  year={2020}
}

@article{mao2021multi,
  title={Multi-level motion attention for human motion prediction},
  author={Mao, Wei and Liu, Miaomiao and Salzmann, Mathieu and Li, Hongdong},
  journal={International Journal of Computer Vision},
  volume={129},
  number={9},
  pages={2513--2535},
  year={2021},
  publisher={Springer}
}

Acknowledgments

The overall code framework (data loading, training, testing, etc.) is adapted from 3d-pose-baseline.

The predictor model code is adapted from LTD.

Some of our evaluation and data-processing code was adapted/ported from Residual Sup. RNN by Julieta.

Licence

MIT
