This repository provides the source code for the paper *Pose Adaptive Dual Mixup for Few-Shot Single-View 3D Reconstruction*, published at AAAI-22. The implementation is evaluated on ShapeNet.
If you find this work useful, please cite:

```bibtex
@inproceedings{padmix,
  title={Pose Adaptive Dual Mixup for Few-Shot Single-View 3D Reconstruction},
  author={Cheng, Ta-Ying and Yang, Hsuan-Ru and Trigoni, Niki and Chen, Hwann-Tzong and Liu, Tyng-Luh},
  booktitle={AAAI},
  year={2022}
}
```
To get started, clone the repository:

```
git clone https://github.com/ttchengab/PADMix.git
```
The ShapeNet dataset is available below:
- ShapeNet rendering images: http://cvgl.stanford.edu/data2/ShapeNetRendering.tgz
- ShapeNet voxelized models: http://cvgl.stanford.edu/data2/ShapeNetVox32.tgz
Create two directories, one for saving templates and the other for saving checkpoints:

```
mkdir template
mkdir ckpts_fewshot
```
To generate the priors, use the following command:

```
python saveTV.py
```
To train the ground-truth autoencoder, use the following command:

```
python fewshot_AE.py
```
To train with Input Mixup, use the following command:

```
python fewshot_mixup_triplet.py
```
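For intuition, Input Mixup forms convex combinations of pairs of training examples and of their targets. A minimal sketch of the generic Mixup formulation applied to rendering images and their voxel ground truths (not the exact PADMix implementation; shapes, the helper name, and the Beta parameter are assumptions):

```python
import numpy as np

def input_mixup(img_a, img_b, vox_a, vox_b, alpha=1.0, rng=None):
    """Generic input-level Mixup: blend two images and their voxel
    ground truths with one shared Beta-sampled coefficient."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)  # mixing coefficient in [0, 1]
    mixed_img = lam * img_a + (1.0 - lam) * img_b
    mixed_vox = lam * vox_a + (1.0 - lam) * vox_b
    return mixed_img, mixed_vox, lam
```

The same coefficient is used for the image pair and the voxel pair, so the mixed input stays consistent with its mixed supervision signal.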
To train with Latent Mixup, use the following command:

```
python fewshot_latent_mixup.py
```
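Latent Mixup applies the same convex combination, but to encoded features rather than raw inputs. A sketch with a toy linear stand-in for the image encoder (the `encode` helper and dimensions are hypothetical, used only to show where the mixing happens):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(img, W):
    """Toy stand-in for the image encoder: flatten and project."""
    return img.reshape(-1) @ W

def latent_mixup(img_a, img_b, W, alpha=1.0, rng=rng):
    """Mix in latent space: encode each image, then interpolate
    the two latent codes with a Beta-sampled coefficient."""
    lam = rng.beta(alpha, alpha)
    z = lam * encode(img_a, W) + (1.0 - lam) * encode(img_b, W)
    return z, lam
```

The interpolated latent code would then be fed to the voxel decoder, with the voxel targets mixed by the same coefficient as in Input Mixup.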
To evaluate the IoU, use the following command:

```
python fewshot_eval.py
```
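Evaluation reports the intersection-over-union between the predicted occupancy grid and the ground-truth voxels. A minimal sketch of the metric (the 0.3 binarization threshold is an assumption, not necessarily the value `fewshot_eval.py` uses):

```python
import numpy as np

def voxel_iou(pred, gt, threshold=0.3):
    """IoU of two occupancy grids (e.g. 32^3): `pred` holds occupancy
    probabilities and is binarized at `threshold`; `gt` is binary."""
    p = pred > threshold
    g = gt > 0.5
    intersection = np.logical_and(p, g).sum()
    union = np.logical_or(p, g).sum()
    return intersection / union if union > 0 else 1.0
```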
The pretrained model under the one-shot setting on ShapeNet is available here.