Code repository for this paper:
DynaVol: Unsupervised Learning for Dynamic Scenes through Object-Centric Voxelization
Yanpeng Zhao, Siyu Gao, Yunbo Wang†, Xiaokang Yang
- [2024.1.17] DynaVol was accepted to ICLR 2024!
git clone https://github.com/zyp123494/DynaVol.git
cd DynaVol
pip install -r requirements.txt
The installation of PyTorch, torch_scatter, and DGL (the CPU version is sufficient) is machine-dependent; please install the versions that match your machine.
The DynaVol dataset is available on GoogleDrive or OneDrive. For each scene, we release the static data, the dynamic data, and the dynamic data collected from 4 fixed views (which can be used to train DeVRF). Please refer to the following data structure for an overview of the DynaVol dataset.
[3ObjFall|6ObjFall|...]
├── static
│   ├── [train|val|test]
│   └── transforms_[train|val|test].json
├── dynamic
│   ├── [train|val|test]
│   └── transforms_[train|val|test].json
└── dynamic_4views
    ├── [train]
    └── transforms_[train].json
For more details on the DynaVol dataset and the code to generate it, please refer to DynaVol_dataset.
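For reference, the `transforms_*.json` files above appear to follow the NeRF-Blender convention (a `camera_angle_x` field plus a `frames` list of `file_path` / `transform_matrix` entries). That format is an assumption on our part, and `load_transforms` is a hypothetical helper rather than part of this repo; a minimal loading sketch:

```python
import json
import numpy as np

def load_transforms(path):
    """Parse a NeRF-Blender-style transforms_*.json file (assumed format).

    Returns the horizontal field of view, the per-frame image paths, and
    the camera-to-world poses as an (N, 4, 4) array.
    """
    with open(path) as f:
        meta = json.load(f)
    paths = [frame["file_path"] for frame in meta["frames"]]
    poses = np.array(
        [frame["transform_matrix"] for frame in meta["frames"]],
        dtype=np.float32,
    )
    return meta.get("camera_angle_x"), paths, poses
```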
Stage 1: Warmup stage
$ cd warmup
$ bash run_full.sh
Stage 2: Dynamic grounding stage. Modify `static_model_path` in the config to the checkpoint path from the first stage (e.g., "/DynaVol/warmup/exp/3ObjFall/fine_last_n.tar").
$ cd ../dynamic_grounding
$ bash run.sh
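The config edit for Stage 2 might look like the following. This is only a sketch: the key name `static_model_path` comes from the instruction above, the surrounding Python-style config follows the DirectVoxGO convention this codebase builds on, and the scene name and path are the README's example:

```python
# In the dynamic_grounding config for your scene (e.g. a 3ObjFall config),
# point static_model_path at the checkpoint saved by the warmup stage.
expname = "3ObjFall"  # example scene name
static_model_path = "/DynaVol/warmup/exp/3ObjFall/fine_last_n.tar"
```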
Code for real-world data is coming soon!
If you find our work helpful, please cite our paper.
@inproceedings{zhao2024dynavol,
  author    = {Yanpeng Zhao and Siyu Gao and Yunbo Wang and Xiaokang Yang},
  title     = {DynaVol: Unsupervised Learning for Dynamic Scenes through Object-Centric Voxelization},
  booktitle = {International Conference on Learning Representations},
  year      = {2024}
}
This codebase builds upon DirectVoxGO and DeVRF.