Bidirectionally Deformable Motion Modulation

Official PyTorch implementation of BDMM: "Bidirectionally Deformable Motion Modulation For Video-based Human Pose Transfer" [ICCV 2023]
Wing-Yin Yu, Lai-Man Po, Ray C.C. Cheung, Yuzhi Zhao, Yu Xue, Kun Li
Department of Electrical Engineering, City University of Hong Kong

Abstract

Video-based human pose transfer is a video-to-video generation task that animates a plain source human image based on a series of target human poses. Given the difficulty of transferring highly structured garment patterns and discontinuous poses, existing methods often generate unsatisfactory results such as distorted textures and flickering artifacts. To address these issues, we propose a novel Deformable Motion Modulation (DMM) that utilizes geometric kernel offsets with adaptive weight modulation to perform feature alignment and style transfer simultaneously. Unlike the standard style modulation used in style transfer, the proposed modulation mechanism adaptively reconstructs smoothed frames from style codes according to the object shape through an irregular receptive field of view. To enhance spatio-temporal consistency, we leverage bidirectional propagation to extract hidden motion information from a warped image sequence generated by noisy poses. The proposed feature propagation significantly improves motion prediction through forward and backward propagation. Both quantitative and qualitative experimental results demonstrate superiority over the state of the art in terms of image fidelity and visual continuity.
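
For intuition, the following is a minimal, hypothetical sketch of a DMM-style block in PyTorch: a style code modulates the convolution input while learned kernel offsets and a sigmoid mask drive a modulated deformable convolution through torchvision.ops.deform_conv2d. It only illustrates the mechanism described above; the repository's actual layers are built on DCNv2, and every name below is made up.

import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d  # mask support requires torchvision >= 0.9

# Hypothetical sketch of a Deformable Motion Modulation (DMM)-style block,
# not the repository's implementation.
class DMMSketch(nn.Module):
    def __init__(self, channels, style_dim, kernel_size=3):
        super().__init__()
        self.pad = kernel_size // 2
        self.weight = nn.Parameter(
            0.01 * torch.randn(channels, channels, kernel_size, kernel_size))
        # Style code -> per-channel adaptive weight modulation.
        self.to_style = nn.Linear(style_dim, channels)
        # Features -> geometric kernel offsets (2 per sampling point) and
        # a modulation mask (1 per sampling point) for the deformable kernel.
        self.to_offset = nn.Conv2d(channels, 2 * kernel_size ** 2, 3, padding=1)
        self.to_mask = nn.Conv2d(channels, kernel_size ** 2, 3, padding=1)

    def forward(self, x, style):
        # Adaptive weight modulation: scale input channels by the style code.
        s = self.to_style(style).view(-1, x.size(1), 1, 1)
        # Irregular receptive field: offsets deform the 3x3 sampling grid.
        offset = self.to_offset(x)
        mask = torch.sigmoid(self.to_mask(x))
        return deform_conv2d(x * s, offset, self.weight, padding=self.pad, mask=mask)

# Example: DMMSketch(64, 128)(torch.randn(1, 64, 32, 32), torch.randn(1, 128))
# returns a (1, 64, 32, 32) feature map.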

Getting Started

Our generated images

Our generated images on the fashion test set can be downloaded from OneDrive.

Installation

Step 1: Clone the GitHub repository

git clone https://github.com/rocketappslab/BDMM.git
cd BDMM

Step 2: Create the conda environment and install the dependencies

conda create -n bdmm python=3.7
conda activate bdmm
pip install -r requirements.txt
conda install pytorch==1.8.1 torchvision==0.9.1 torchaudio==0.8.1 cudatoolkit=10.2 -c pytorch
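
Before moving on, you can optionally confirm that the pinned versions are active inside the environment:

import torch
import torchvision

print(torch.__version__)          # expect 1.8.1
print(torchvision.__version__)    # expect 0.9.1
print(torch.cuda.is_available())  # should be True with CUDA 10.2 set up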

Step 3: Install the neural renderer for SMPL

cd SMPLDataset/mesh_renderer/
python setup.py install
cd ../../

Step 4: Download the weights for SMPLDataset from OneDrive. Unzip the file and put it under the SMPLDataset folder as shown below.

+-- BDMM
|   +-- SMPLDataset
|       +-- checkpoints
|           +-- gmm_08.pkl
|           +-- openpose_body25.pth
|           +-- smpl_model.pkl
|           +-- spin_ckpt.pth
|       +-- human_cropper
|       +-- human_digitalizer
|       ...
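
Optionally, a short check (file names taken from the tree above) confirms the weights landed in the right place:

from pathlib import Path

# Confirm the SMPLDataset weights listed above are present.
ckpt_dir = Path("SMPLDataset/checkpoints")
for name in ["gmm_08.pkl", "openpose_body25.pth", "smpl_model.pkl", "spin_ckpt.pth"]:
    path = ckpt_dir / name
    print(path, "OK" if path.is_file() else "MISSING")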

Dataset

[Option 1] You can directly download the pre-processed fashion dataset from OneDrive. Unzip the file and put it under the dataset folder as shown below. (Create the dataset folder if necessary.)

+-- BDMM
|   +-- dataset
|       +-- UBC_fashion_smpl
|           +-- test
|           +-- test_frames
|           +-- train
|           +-- train_frames
|           +-- fid_stat.npz
|           +-- test_list.csv

[Option 2] You can also download the dataset from the official website and process it with the following command:

python SMPLDataset/process_fashion.py --video_dir dataset/UBC_fashion/ --output_dir dataset/UBC_fashion_smpl

Training

You can train the network with the following command. Change batchSize and gpu_id if necessary.

python train.py \
--name bdmm_dancefashion_checkpoints \
--display_freq 100 \
--print_freq 100 \
--batchSize 1 \
--nThreads 4 \
--gpu_id 0 \
--model dance \
--dataset_mode dance \
--sub_dataset fashion \
--dataroot ./dataset/UBC_fashion_smpl \
--niter 50000 \
--niter_decay 50000

Testing

You can test the network with the following command. Download the pre-trained weights for the fashion dataset from OneDrive and place them as shown below, or change name to test your own experiments.

+-- BDMM
|   +-- checkpoints
|       +-- bdmm_dancefashion_checkpoints
|           +-- latest_net_D.pth
|           +-- latest_net_D_V.pth
|           +-- latest_net_G.pth

python test.py \
--name bdmm_dancefashion_checkpoints \
--batchSize 1 \
--gpu_id 0 \
--model dance \
--dataset_mode dance \
--sub_dataset fashion \
--dataroot ./dataset/UBC_fashion_smpl \
--test_list test_list.csv

Evaluation

We provide a script that evaluates the SSIM, PSNR, L1, FID, LPIPS, and FVD metrics. You can evaluate the network with the following command.

python ./evaluation/getMetrics_animation.py \
--gt_root ./dataset/UBC_fashion_smpl \
--name bdmm_dancefashion_checkpoints
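
To spot-check a single generated/ground-truth frame pair before running the full script, the per-frame metrics can be approximated with standard libraries. A minimal sketch, assuming a recent scikit-image and the lpips package are installed (the file paths are placeholders; FID and FVD are distribution-level metrics and are left to the script above):

import numpy as np
import torch
import lpips
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Placeholder paths: substitute one generated frame and its ground truth.
gt = np.array(Image.open("gt_frame.png").convert("RGB"))
gen = np.array(Image.open("gen_frame.png").convert("RGB"))

print("L1:  ", np.abs(gt.astype(np.float64) - gen.astype(np.float64)).mean() / 255.0)
print("PSNR:", peak_signal_noise_ratio(gt, gen, data_range=255))
print("SSIM:", structural_similarity(gt, gen, channel_axis=-1, data_range=255))

# LPIPS expects float tensors in [-1, 1] with shape (N, 3, H, W).
to_tensor = lambda a: torch.from_numpy(a).permute(2, 0, 1)[None].float() / 127.5 - 1.0
loss_fn = lpips.LPIPS(net="alex")
print("LPIPS:", loss_fn(to_tensor(gt), to_tensor(gen)).item())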

Citation

If you use this code for your research, please cite our paper:

@article{yu2023bidirectionally,
  title={Bidirectionally Deformable Motion Modulation For Video-based Human Pose Transfer},
  author={Yu, Wing-Yin and Po, Lai-Man and Cheung, Ray and Zhao, Yuzhi and Xue, Yu and Li, Kun},
  journal={arXiv preprint arXiv:2307.07754},
  year={2023}
}

Acknowledgments

Our code is based on GFLA, iPERCore, and DCNv2; thanks for their great work.
