Paper | Project Page | Video | Lightning Version
This repository contains the official implementation for our publication: "Efficient and Scalable Point Cloud Generation with Sparse Point-Voxel Diffusion Models."
- 12/8/2024: arXiv submission of the SPVD preprint.
- 12/9/2024: Release of SPVD Lightning. We replace the custom pclab library with PyTorch Lightning ⚡
We recommend using Anaconda to manage your Python environment.
conda create --name spvd python=3.9
conda activate spvd
git clone https://github.com/JohnRomanelis/SPVD.git
We have tested our code with PyTorch 2.0 and CUDA 11.8. You can install compatible versions using the following command:
conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 pytorch-cuda=11.8 -c pytorch -c nvidia
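After installation, you can sanity-check that PyTorch was built with CUDA support and can see your GPU (a quick check, not part of the original setup steps):

```shell
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```

If this prints `False`, the CUDA toolkit version and the installed PyTorch build likely do not match.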
You can also install most of the required libraries through requirements.txt by running:
pip install -r requirements.txt
pclab is a helper library based on the fast.ai Practical Deep Learning - Part 2 course.
Note: Make sure PyTorch is installed before installing pclab, so that the correct versions of its dependencies are resolved.
- Clone the pclab repository.
git clone https://github.com/JohnRomanelis/pclab.git
- Navigate into the pclab directory:
cd pclab
- Install pclab. This will automatically install the required dependencies:
pip install -e .
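To confirm the editable install worked, you can try importing the package (a simple smoke test; assumes the package is importable as `pclab`):

```shell
python -c "import pclab"
```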
- TorchSparse depends on the Google Sparse Hash library. To install it on Ubuntu, run:
sudo apt-get install libsparsehash-dev
- Clone the torchsparse repo:
git clone https://github.com/mit-han-lab/torchsparse.git
- Navigate into the torchsparse directory:
cd torchsparse
- Install torchsparse:
pip install -e .
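A quick smoke test to verify the torchsparse build (assumes the package exposes a `__version__` attribute, as recent releases do):

```shell
python -c "import torchsparse; print(torchsparse.__version__)"
```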
- Chamfer
- Navigate to the SPVD/metrics/chamfer_dist directory:
cd SPVD/metrics/chamfer_dist
- Run:
python setup.py install --user
- EMD
- Navigate to the SPVD/metrics/PyTorchEMD directory:
cd SPVD/metrics/PyTorchEMD
- Run:
python setup.py install
- Run:
cp ./build/lib.linux-x86_64-cpython-310/emd_cuda.cpython-310-x86_64-linux-gnu.so .
If this last command raises an error, list the directories inside build and replace the directory name with the one on your machine matching lib.linux-x86_64-cpython-* (the tag depends on your Python version).
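Alternatively, a glob usually handles the varying Python/architecture tag directly (a sketch, assuming a single matching build directory):

```shell
# Copy the built CUDA extension regardless of the exact cpython tag
cp ./build/lib.linux-x86_64-cpython-*/emd_cuda*.so .
```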
You can replicate all the experiments from our paper using the notebooks provided in the experiments
folder. Below is a catalog of the experiments featured in our paper, along with brief descriptions.
- TrainGeneration: Train a generative model for unconditional point cloud generation on a single ShapeNet class.
- ConditionalGeneration: Train a class-conditional model on all ShapeNet categories.
- TrainCompletion: Train a model for part completion on PartNet.
- SuperResolution: Train a model for point cloud super-resolution.
A more comprehensive list, including additional comments and experiments, is available here.
All `#export` comments are used with the `utils/notebook2py.py` script to export parts of the notebooks to .py scripts.
For generation, we use the same version of ShapeNet as PointFlow. Please refer to their instructions for downloading the dataset.
For completion, we use PartNet. Download the data from the official PartNet website. To process the data, check the PartNetDataset notebook.
If you find this work useful in your research, please consider citing:
@misc{romanelis2024efficientscalablepointcloud,
title={Efficient and Scalable Point Cloud Generation with Sparse Point-Voxel Diffusion Models},
author={Ioannis Romanelis and Vlassios Fotis and Athanasios Kalogeras and Christos Alexakos and Konstantinos Moustakas and Adrian Munteanu},
year={2024},
eprint={2408.06145},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2408.06145},
}