Recursive Octree Auto-Decoder (ROAD)

PyTorch implementation of the CoRL 2022 paper "ROAD: Learning an Implicit Recursive Octree Auto-Decoder to Efficiently Encode 3D Shapes".

![ROAD demo](road.gif)

Installation

To set up the environment using conda, run:

```shell
conda create -n road python=3.10
conda activate road
```

Install PyTorch for your specific CUDA version (11.6 in this example), as well as the additional dependencies listed in requirements.txt:

```shell
pip install torch==1.13.0+cu116 torchvision==0.14.0+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
pip install -r requirements.txt
```
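After installation, you can optionally sanity-check that the core dependencies resolve. The helper below is illustrative only and not part of the repository:

```python
# Optional sanity check: verify that the core dependencies are importable.
# This helper is illustrative and not part of the ROAD repository.
import importlib.util

def check_deps(packages):
    """Return a dict mapping each package name to whether it can be imported."""
    return {pkg: importlib.util.find_spec(pkg) is not None for pkg in packages}

if __name__ == "__main__":
    for pkg, found in check_deps(["torch", "torchvision"]).items():
        print(f"{pkg}: {'found' if found else 'MISSING'}")
```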

Training and Inference

To demonstrate the workflow of our pipeline, we include three mesh models from the HomebrewedDB dataset. The config.yaml file stores default parameters for training and evaluation and points to these three models.

To start training, run the following script. It first generates and stores training data from the provided meshes and then trains ROAD using the curriculum procedure.

```shell
python train.py --config configs/config.yaml
```
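The curriculum procedure typically trains coarse octree levels first and unlocks finer levels as training progresses. The sketch below illustrates the general idea of such a schedule; the function name and schedule shape are hypothetical, not the repository's actual implementation:

```python
# Conceptual sketch of a coarse-to-fine curriculum schedule.
# All names here are hypothetical illustrations, not the repo's actual API.
def curriculum_schedule(max_lod, epochs_per_lod):
    """Yield (epoch, active_lod) pairs, unlocking one octree level at a time."""
    epoch = 0
    for lod in range(1, max_lod + 1):
        for _ in range(epochs_per_lod):
            yield epoch, lod
            epoch += 1

if __name__ == "__main__":
    # Example: train 2 epochs per level, up to LoD 3.
    for epoch, lod in curriculum_schedule(max_lod=3, epochs_per_lod=2):
        print(f"epoch {epoch}: supervising octree levels 1..{lod}")
```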

To visualize the trained model, run:

```shell
python visualize.py --config configs/config.yaml
```

Additionally, you can provide the `lods` parameter to specify the desired output level of detail (LoD):

```shell
python visualize.py --config configs/config.yaml --lods 5
```
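Each octree level of detail doubles the effective resolution per axis, so LoD n corresponds to a 2^n voxel grid. The snippet below is standard octree arithmetic, shown for illustration only, not code from this repository:

```python
# Effective grid resolution and dense leaf-cell count at a given octree LoD.
# Standard octree arithmetic, shown for illustration only.
def octree_resolution(lod):
    """Cells per axis at the given level of detail."""
    return 2 ** lod

def octree_leaf_count(lod):
    """Total leaf cells of a dense octree at the given LoD."""
    return octree_resolution(lod) ** 3

if __name__ == "__main__":
    for lod in range(1, 6):
        print(f"LoD {lod}: {octree_resolution(lod)}^3 = {octree_leaf_count(lod)} cells")
```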

To evaluate the trained model, run:

```shell
python evaluate.py --config configs/config.yaml
```
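Shape reconstruction quality is commonly measured with the Chamfer distance between sampled point clouds. The snippet below is a minimal NumPy sketch of that metric, offered as background rather than the repository's actual evaluation code:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3).

    Averages the squared distance from each point to its nearest neighbor
    in the other set. Background sketch only; the repo's evaluation may
    use a different formulation.
    """
    # Pairwise squared distances via broadcasting, shape (N, M).
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

if __name__ == "__main__":
    a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    b = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    print(chamfer_distance(a, b))  # identical sets -> 0.0
```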

Pre-trained Models

ROAD models pre-trained on Thingi32, Google Scanned Objects (GSO), and AccuCities can be found here:

| Dataset  | # Objects | Latent size | Link  |
|----------|-----------|-------------|-------|
| Thingi32 | 32        | 64          | model |
| Thingi32 | 32        | 128         | model |
| GSO      | 128       | 512         | model |
| GSO      | 256       | 512         | model |
| City     | 1         | 512         | model |

To visualize a pre-trained model, download it into the pretrained folder and run:

```shell
python visualize.py --config pretrained/model/config.yaml
```

Reference

```bibtex
@inproceedings{zakharov2022road,
    title={ROAD: Learning an Implicit Recursive Octree Auto-Decoder to Efficiently Encode 3D Shapes},
    author={Sergey Zakharov and Rares Ambrus and Katherine Liu and Adrien Gaidon},
    booktitle={Conference on Robot Learning (CoRL)},
    year={2022},
    url={https://arxiv.org/pdf/2212.06193.pdf}
}
```

License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
