
# VIRDO: Visio-tactile Implicit Representations of Deformable Objects


This is the official repository for VIRDO: Visio-tactile Implicit Representations of Deformable Objects (ICRA 2022). The code builds on the SIREN and PointNet repositories.

## Quick Start

- Reconstruction & latent space composition: Open in Colab
- Inference using a partial point cloud: Open in Colab

## Step 0: Set up the environment

```bash
conda create -n virdo python=3.8
conda activate virdo
conda install pytorch==1.7.0 torchvision==0.8.0 torchaudio==0.7.0 cudatoolkit=11.0 -c pytorch
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install -c bottler nvidiacub
conda install pytorch3d=0.4.0 -c pytorch3d
pip install -r requirements.txt
pip install --ignore-installed open3d
```

After installation, re-source or reopen the terminal.
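
To confirm the environment installed correctly, here is a minimal sanity-check sketch (the expected versions match the install commands above):

```python
# Quick environment sanity check (a minimal sketch).
import torch
import pytorch3d
import open3d

print(torch.__version__)          # expect 1.7.0
print(pytorch3d.__version__)      # expect 0.4.0
print(open3d.__version__)
print(torch.cuda.is_available())  # expect True if cudatoolkit 11.0 matches your driver
```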

## Step 1: Download pretrained model and dataset

Make sure `wget` and `unzip` are installed (`apt-get install wget unzip`).

```bash
source download.sh
download_dataset
download_pretrained
```

### (Optional) Manual download

Alternatively, you can manually download the dataset and pretrained models from here. Then place the files as follows:

```
VIRDO
├── data
│   └── virdo_simul_dataset.pickle
└── pretrained_model
    ├── force_final.pth
    ├── object_final.pth
    └── deform_final.pth
```
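
Whichever route you took, a quick check that everything landed in place (a minimal sketch; the internal layout of the pickle is not documented here, so it only reports the top-level type and keys):

```python
# Verify the expected files exist and the dataset pickle deserializes.
import os
import pickle

for path in [
    "data/virdo_simul_dataset.pickle",
    "pretrained_model/force_final.pth",
    "pretrained_model/object_final.pth",
    "pretrained_model/deform_final.pth",
]:
    print(path, "found" if os.path.exists(path) else "MISSING")

with open("data/virdo_simul_dataset.pickle", "rb") as f:
    dataset = pickle.load(f)
print(type(dataset))
if isinstance(dataset, dict):
    print(list(dataset.keys()))  # top-level keys only; layout is undocumented
```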

## Step 2: Pretrain nominal shapes

```bash
python pretrain.py --config config/virdo.yaml --gpu_id 0
```

To check the result of your pretrained model, run:

```bash
python pretrain.py --config config/virdo.yaml --gpu_id 0 --from_pretrained logs/pretrain/checkpoints/shape_latest.pth
```

The nominal reconstructions are then written to the `logs/pretrain/ply` directory.
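
To take a quick look at one of those reconstructions programmatically, here is a minimal sketch using open3d (installed in Step 0). The exact filenames written by `pretrain.py` are not listed here, so it simply opens the first `.ply` it finds:

```python
# View one reconstruction mesh from logs/pretrain/ply (a minimal sketch).
import glob
import open3d as o3d

ply_files = sorted(glob.glob("logs/pretrain/ply/*.ply"))
assert ply_files, "no .ply files found -- run the pretrain command above first"

mesh = o3d.io.read_triangle_mesh(ply_files[0])
mesh.compute_vertex_normals()  # needed for shaded rendering
o3d.visualization.draw_geometries([mesh])
```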

## Step 3: Train on the entire dataset

```bash
python train.py --config config/virdo.yaml --gpu_id 0 --pretrain_path logs/pretrain/checkpoints/shape_latest.pth
```
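
If training fails to load the checkpoint, a minimal sketch for inspecting `shape_latest.pth` (its internal layout is not documented here, so this only prints the top-level keys):

```python
# Inspect the pretrain checkpoint before full training (a minimal sketch).
import torch

ckpt = torch.load("logs/pretrain/checkpoints/shape_latest.pth", map_location="cpu")
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))  # e.g. model/optimizer state dicts, if present
else:
    print(type(ckpt))
```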
