This repository is the official implementation of our ICCV 2023 paper:
MAPConNet: Self-supervised 3D Pose Transfer with Mesh and Point Contrastive Learning. [arXiv]
Jiaze Sun¹, Zhixiang Chen², Tae-Kyun Kim¹,³
¹Imperial College London, ²University of Sheffield, ³Korea Advanced Institute of Science and Technology
Abstract: 3D pose transfer is a challenging generation task that aims to transfer the pose of a source geometry onto a target geometry with the target identity preserved. Many prior methods require keypoint annotations to find correspondence between the source and target. Current pose transfer methods allow end-to-end correspondence learning but require the desired final output as ground truth for supervision. Unsupervised methods have been proposed for graph convolutional models but they require ground truth correspondence between the source and target inputs. We present a novel self-supervised framework for 3D pose transfer which can be trained in unsupervised, semi-supervised, or fully supervised settings without any correspondence labels. We introduce two contrastive learning constraints in the latent space: a mesh-level loss for disentangling global patterns including pose and identity, and a point-level loss for discriminating local semantics. We demonstrate quantitatively and qualitatively that our method achieves state-of-the-art results in supervised 3D pose transfer, with comparable results in unsupervised and semi-supervised settings. Our method is also generalisable to unseen human and animal data with complex topologies.
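For intuition only, the snippet below is a minimal InfoNCE-style contrastive loss sketch in PyTorch. It is a generic illustration of latent-space contrastive learning, not the exact mesh-level or point-level losses used in the paper; the function name, shapes, and temperature value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, temperature=0.07):
    """Generic InfoNCE loss: pull `anchor` towards `positive`, push it away from `negatives`.

    anchor:    (B, D) latent features
    positive:  (B, D) features that should match the anchor
    negatives: (B, N, D) features that should not match
    Note: a simplified illustration, not the paper's exact formulation.
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    pos_logit = (anchor * positive).sum(dim=-1, keepdim=True)         # (B, 1)
    neg_logits = torch.einsum('bd,bnd->bn', anchor, negatives)        # (B, N)
    logits = torch.cat([pos_logit, neg_logits], dim=1) / temperature  # (B, 1+N)

    # The positive pair always sits at index 0.
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)
```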
- Install Anaconda with Python 3.6, then install the following dependencies:

  ```
  conda install pytorch==1.4.0 torchvision==0.5.0 cudatoolkit=10.0 -c pytorch
  conda install -c conda-forge pymesh2
  conda install -c conda-forge tqdm
  ```
- Navigate to the `MAPConNet` root directory:

  ```
  cd MAPConNet
  ```
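As a quick sanity check of the environment, the main dependencies installed above can be imported from Python (an optional, illustrative snippet, not part of the repository):

```python
# Verify that the dependencies installed above are importable.
import torch
import pymesh  # provided by the pymesh2 conda package
import tqdm

print(torch.__version__)          # expect 1.4.0
print(torch.cuda.is_available())  # True if the CUDA 10.0 toolkit and driver are set up
```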
Our implementation is based on 3D-CoreNet. We also include code from Synchronized-BatchNorm-PyTorch under `models/networks/sync_batchnorm`.
- Our human data is the same as NPT and 3D-CoreNet, which is generated using SMPL. It can be downloaded here.
- Our animal data is the same as 3D-CoreNet, which is generated using SMAL. It can be downloaded here.
- The downloaded files should be decompressed into the folders `npt-data` and `smal-data` for humans and animals respectively. These should be placed under `--dataroot`, which by default is `../data`.
There are two dataset modes, `human` and `animal`, which correspond to SMPL and SMAL data respectively. This is specified using the `--dataset_mode` option.
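Assuming the default layout described above, you can check that the data folders are in the expected place with a short snippet like the following (illustrative only; adjust the path if you use a different `--dataroot`):

```python
import os

dataroot = '../data'  # default value of --dataroot
for folder in ('npt-data', 'smal-data'):
    path = os.path.join(dataroot, folder)
    status = 'found' if os.path.isdir(path) else 'MISSING'
    print(f'{path}: {status}')
```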
To train a model from scratch in a fully supervised manner, run the following command:

```
python train.py --dataset_mode [dataset_mode] --dataroot [parent directory of npt-data] --exp_name [name of experiment] --gpu_ids 0,1
```
To train a model from scratch in a fully unsupervised manner, run the following command:

```
python train.py --dataset_mode [dataset_mode] --dataroot [parent directory of npt-data] --exp_name [name of experiment] --gpu_ids 0,1 --percentage 0 --use_unlabelled
```
To train a model from scratch in a semi-supervised manner, where the labelled set contains 50% of all available identities and poses, run the following command:

```
python train.py --dataset_mode [dataset_mode] --dataroot [parent directory of npt-data] --exp_name [name of experiment] --gpu_ids 0,1 --percentage 50 --use_unlabelled
```
The checkpoints during training will be saved to `output/[dataset_mode]/[exp_name]/checkpoints/`.
Additional training options with descriptions can be found or added in the files `./options/base_options.py` and `./options/train_options.py`. For instance, to resume training from the latest checkpoint, append `--continue_train` to the training command.
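To see which checkpoints have been written so far, you can list the checkpoint directory mentioned above (an illustrative snippet; the experiment name is hypothetical and the exact checkpoint file names depend on the repository's saving logic):

```python
import os

dataset_mode = 'human'       # or 'animal'
exp_name = 'my_experiment'   # the --exp_name used for training (hypothetical)
ckpt_dir = os.path.join('output', dataset_mode, exp_name, 'checkpoints')

for name in sorted(os.listdir(ckpt_dir)):
    print(name)
```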
To test a newly trained model, run the following command:

```
python test.py --dataset_mode [dataset_mode] --dataroot [parent directory of npt-data] --exp_name [name of experiment] --test_epoch [the epoch to load] --metric [the metric to use] --save_output
```
The quantitative results will be saved to `output/[dataset_mode]/[exp_name]/[epoch]/`, and the final and warped outputs are saved to `output/[dataset_mode]/[exp_name]/[epoch]/outputs/`.
There are two options for `--metric`: `PMD` and `CD`, which are Pointwise Mesh Distance and Chamfer Distance respectively. Additional test options with descriptions can be found or added in the file `./options/test_options.py`.
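For reference, PMD compares corresponding vertices directly, while CD matches each point to its nearest neighbour in the other set. The sketch below is a generic PyTorch illustration of those standard definitions; the repository's implementation may differ in scaling or reduction.

```python
import torch

def pointwise_mesh_distance(pred, gt):
    """PMD: mean squared distance between corresponding vertices.
    pred, gt: (N, 3) tensors with the same vertex ordering."""
    return ((pred - gt) ** 2).sum(dim=-1).mean()

def chamfer_distance(pred, gt):
    """CD: symmetric nearest-neighbour distance between two point sets.
    pred: (N, 3), gt: (M, 3); no correspondence assumed."""
    dists = torch.cdist(pred, gt) ** 2           # (N, M) squared pairwise distances
    return dists.min(dim=1)[0].mean() + dists.min(dim=0)[0].mean()
```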
We provide pretrained checkpoints for models (D) and (N) in Table 1 of our paper, which are the best-performing models on SMPL and SMAL respectively. The checkpoints can be downloaded from here. The downloaded `output` folder should be put under the `MAPConNet` directory.
- To load the human model checkpoints during testing, set `--dataset_mode` to `human` and `--exp_name` to `SMPL`.
- To load the animal model checkpoints during testing, set `--dataset_mode` to `animal` and `--exp_name` to `SMAL`.
If you find our work useful for your research, please cite our paper:
```bibtex
@InProceedings{Sun2023MAPConNet,
    author    = {Sun, Jiaze and Chen, Zhixiang and Kim, Tae-Kyun},
    title     = {MAPConNet: Self-supervised 3D Pose Transfer with Mesh and Point Contrastive Learning},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {14452-14462}
}
```