
Vox-Fusion

Vox-Fusion: Dense Tracking and Mapping with Voxel-based Neural Implicit Representation

Xingrui Yang*, Hai Li*, Hongjia Zhai, Yuhang Ming, Yuqian Liu, Guofeng Zhang.

ISMAR 2022

Correction

We found a bug in the evaluation script that affected the estimated pose accuracy reported in Tables 1 and 3 of the original paper. We have fixed the problem and re-run the experiments with updated configurations. The corrected results are comparable to (and, for the Replica dataset, even better than) the originally reported numbers, so the contribution and conclusions of our work are unaffected. We have updated the arXiv version of the paper and published all of the latest results (including meshes, poses, ground truth, evaluation scripts, and training configs) on Google Drive, in case anyone wants to reproduce our results or compare against them with different metrics.

[Figures: corrected Table 1 and Table 3]

Please use the config files replica_all.yaml and scannet_all.yaml from the Google Drive to replicate the results from the paper.

Installation

We recommend installing PyTorch (>= 1.10) manually for your hardware platform first. You can then install all remaining dependencies using pip or conda:

pip install -r requirements.txt

After all third-party libraries are installed, run the following script to build the extra PyTorch modules used in this project.

sh install.sh

Then replace the library filename in mapping.py with the path to the built library:

torch.classes.load_library("third_party/sparse_octree/build/lib.xxx/svo.xxx.so")
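The build directory name is platform-specific (the xxx parts above depend on your OS and Python version), so one way to avoid hard-coding it is to glob for the built library. This is a hypothetical helper, not part of the repository; the function name is illustrative:

```python
import glob


def find_svo_library(build_dir="third_party/sparse_octree/build"):
    """Locate the built svo extension without hard-coding the
    platform-specific build directory name (e.g. lib.linux-x86_64-3.9)."""
    matches = glob.glob(f"{build_dir}/lib.*/svo*.so")
    if not matches:
        raise FileNotFoundError("run install.sh first to build the extension")
    return matches[0]


# In mapping.py one would then call (assuming PyTorch is installed):
# torch.classes.load_library(find_svo_library())
```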

Demo

Running Vox-Fusion on a dataset that already has a dataloader is straightforward; src/datasets lists all existing dataloaders. You can also build your own, which we cover later. For now, we use the Replica dataset as an example.

First, modify configs/replica/room_0.yaml so that the data_path field points to the actual dataset path. You are then all set to run the code:

python demo/run.py configs/replica/room_0.yaml
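For reference, the edited portion of configs/replica/room_0.yaml might look like the sketch below. Only the data_path field is confirmed by this README; the surrounding key is illustrative:

```yaml
# configs/replica/room_0.yaml (excerpt; only data_path is confirmed,
# the dataset key is a hypothetical example)
dataset: replica
data_path: /path/to/Replica/room_0
```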

You should now see a progress bar and some output indicating that the system is running. For now, the progress bar is the only way to estimate the remaining running time, as a GUI is still under development.

Custom Datasets

You can use virtually any RGB-D dataset with Vox-Fusion, including self-captured ones. Adapt the config files and dataloaders and place them in the correct folders. Be sure to implement a get_init_pose function for your dataloader; see src/datasets/tum.py for an example.
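A minimal sketch of a custom dataloader is shown below. The exact base class, frame format, and return conventions depend on the existing loaders in src/datasets; only the get_init_pose requirement is stated in this README, so everything else here is an assumption:

```python
import numpy as np


class MyDataset:
    """Hypothetical skeleton for a custom RGB-D dataloader.

    Only get_init_pose is confirmed by the README; the other methods
    mirror common PyTorch dataset conventions and are assumptions.
    """

    def __init__(self, data_path):
        self.data_path = data_path
        self.frames = []  # e.g. list of (rgb_file, depth_file) pairs

    def get_init_pose(self):
        # Return the initial camera pose as a 4x4 matrix; identity is a
        # common choice when no ground-truth pose is available.
        return np.eye(4)

    def __len__(self):
        return len(self.frames)

    def __getitem__(self, idx):
        # Typically loads and returns one RGB-D frame (and optionally a
        # ground-truth pose); left unimplemented in this sketch.
        raise NotImplementedError
```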

Acknowledgement

Some of our code is adapted from Neural RGB-D Surface Reconstruction and BARF: Bundle-Adjusting Neural Radiance Fields.

Citation

If you find our code or paper useful, please cite:

@inproceedings{yang2022vox,
  title={Vox-Fusion: Dense Tracking and Mapping with Voxel-based Neural Implicit Representation},
  author={Yang, Xingrui and Li, Hai and Zhai, Hongjia and Ming, Yuhang and Liu, Yuqian and Zhang, Guofeng},
  booktitle={2022 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
  pages={499--507},
  year={2022},
}

Contact

Contact Xingrui Yang and Hai Li for questions, comments, and bug reports.
