# As-Plausible-As-Possible (APAP)


Project Page | Paper | arXiv

Seungwoo Yoo\*<sup>1</sup>, Kunho Kim\*<sup>1</sup>, Vladimir G. Kim<sup>2</sup>, Minhyuk Sung<sup>1</sup> (\* co-first authors)

<sup>1</sup>KAIST, <sup>2</sup>Adobe Research

This is the reference implementation of *As-Plausible-As-Possible: Plausibility-Aware Mesh Deformation Using 2D Diffusion Priors* (CVPR 2024).

## Get Started

Clone the repository and create a Python environment:

```bash
git clone https://github.com/KAIST-Visual-AI-Group/APAP
cd APAP
conda create --name apap python=3.9
conda activate apap
conda install pytorch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 pytorch-cuda=11.7 -c pytorch -c nvidia
conda install pytorch-sparse -c pyg
pip install wandb
pip install diffusers==0.19.0
pip install accelerate transformers ninja
pip install cholespy libigl
pip install imageio[ffmpeg] jaxtyping tyro
pip install fpsample trimesh pymeshlab pyrender
```
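
Before proceeding, you can sanity-check the environment with a quick one-liner; the expected versions follow from the install commands above:

```bash
# Verify the PyTorch install and CUDA visibility.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
# Expected on a correctly configured CUDA 11.7 machine: 2.0.1 True
```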

You also need to follow the instructions in the following repositories to install dependencies:

We provide the data necessary to run our code via Google Drive. Download the files from the link and place them under the directory `data`. After that, the directory structure should look like:

```
APAP
├── data
│   ├── apap_2d  # APAP-Bench 2D
│   ├── apap_3d  # APAP-Bench 3D
│   ├── lora_ckpts  # LoRA checkpoint files
│   ├── pretrained_models  # Additional pretrained models (e.g., SAM)
│   └── ...
├── ext
├── scripts
├── src
├── environment.yaml
└── README.md
```

## Making Deformations using APAP-Bench

To launch 3D mesh deformation experiments using APAP-Bench (3D), run:

```bash
python scripts/exp/batch/batch_deform_meshes.py \
    --data-list-path configs/deform_meshes/data/apap_3d.txt \
    --out-root outputs/apap-3d \
    --gpu-ids 0
```

Note that the experiments can be parallelized by specifying multiple GPU IDs via the argument `--gpu-ids`, as shown below.
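
For instance, a run spread across four GPUs might look like the following; the space-separated format for `--gpu-ids` is an assumption, so check the script's help output for the exact syntax:

```bash
# Illustrative multi-GPU invocation; the --gpu-ids format is assumed.
python scripts/exp/batch/batch_deform_meshes.py \
    --data-list-path configs/deform_meshes/data/apap_3d.txt \
    --out-root outputs/apap-3d \
    --gpu-ids 0 1 2 3
```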

Similarly, 2D mesh deformation experiments using APAP-Bench (2D) can be run with:

```bash
python scripts/exp/batch/batch_deform_meshes.py \
    --data-list-path configs/deform_meshes/data/apap_2d-all.txt \
    --out-root outputs/apap-2d \
    --gpu-ids 0
```

## Fine-tuning Stable Diffusion using LoRA

We directly adopt the DreamBooth training script from diffusers without modification. For convenience, we provide a batch script that trains multiple LoRAs in parallel. To run it, execute:

```bash
python scripts/lora/batch_train_dreambooth_lora.py \
    --data-list-path configs/lora_train/apap_3d.txt \
    --exp-group-name apap-3d-lora \
    --out-root outputs/lora_ckpts/apap-3d \
    --gpu-ids 0
```

This will produce LoRA checkpoints, each fine-tuned on the renderings of a mesh in APAP-Bench (3D). Note that each row of a training config file consists of two items, `object_name` and `data_dir`: `object_name` is used to automatically populate the text prompt for fine-tuning, and `data_dir` is the directory containing the fine-tuning images. An illustrative example is sketched below.
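
For illustration only, a config file might contain rows such as the following; the object names, paths, and whitespace delimiter are assumptions rather than actual repository contents:

```text
# Hypothetical config rows: <object_name> <data_dir>
chair data/apap_3d/chair/images
mug   data/apap_3d/mug/images
```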
After training, the outputs are arranged into a directory structure as follows:

```
{out-root}
├── object_name1
│   ├── 0000  # Identifier for image dataset
│   └── ...
├── object_name2
│   ├── 0000  # Identifier for image dataset
│   └── ...
├── object_name3
│   ├── 0000  # Identifier for image dataset
│   └── ...
└── ...
```

The checkpoint directories can be passed to the script `deform_meshes.py` via the command-line argument `--lora-dir`.
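
For example, a single run might point at one of the checkpoints above; the script location, checkpoint path, and omitted arguments here are illustrative:

```bash
# Illustrative only; remaining arguments depend on the experiment setup.
python deform_meshes.py \
    --lora-dir outputs/lora_ckpts/apap-3d/object_name1/0000
```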

## Citation

Please consider citing our work if you find this codebase useful:

```bibtex
@inproceedings{yoo2024apap,
  title = {{As-Plausible-As-Possible: Plausibility-Aware Mesh Deformation Using 2D Diffusion Priors}},
  author = {Yoo, Seungwoo and Kim, Kunho and Kim, Vladimir G. and Sung, Minhyuk},
  booktitle = {CVPR},
  year = {2024},
}
```
