Siting Zhu*, Guangming Wang*, Hermann Blum, Jiuming Liu, Liang Song, Marc Pollefeys, Hesheng Wang
T-PAMI 2025 [Paper] [Project Page]
This repo uses pixi for dependency management (Python 3.14, PyTorch, CUDA 12).
```bash
# Install pixi if you don't have it
curl -fsSL https://pixi.sh/install.sh | bash

# Install all dependencies
pixi install
```

Note: The original repo used conda + `environment.yaml` targeting Python 3.7 and CUDA 11.3. This version has been updated to use pixi with a modern stack (Python 3.14, CUDA 12+). The `environment.yaml` is kept for reference but is not used.
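After `pixi install`, it can be worth confirming the resolved stack matches the stated minimums before launching long runs. The helper below is a sketch (not shipped with the repo): a dotted-version comparison that, in a real check, would be fed `sys.version` and `torch.version.cuda`.

```python
# Minimal environment sanity check (a sketch; the repo does not ship this helper).
# Verifies that reported Python/CUDA versions meet the stated minimums.

def meets_minimum(version: str, minimum: str) -> bool:
    """Compare dotted version strings numerically, e.g. '12.1' >= '12.0'."""
    parse = lambda v: tuple(int(p) for p in v.split("."))
    return parse(version) >= parse(minimum)

def check_stack(python_version: str, cuda_version: str) -> list[str]:
    """Return a list of problems; empty means the stack looks OK."""
    problems = []
    if not meets_minimum(python_version, "3.14"):
        problems.append(f"Python {python_version} < 3.14")
    if not meets_minimum(cuda_version, "12.0"):
        problems.append(f"CUDA {cuda_version} < 12.0")
    return problems

# In a real check these values would come from sys.version_info and torch.version.cuda.
print(check_stack("3.14.0", "12.1"))  # prints []
```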
- Download the data with semantic annotations from Google Drive and save it into the `./data/replica` folder. We only provide a subset of the Replica dataset. For full Replica data generation, please refer to the `data_generation` directory.
- Download the pretrained segmentation network from Google Drive and save it into the `./seg` folder (unzip `seg/facebookresearch_dinov2_main.zip`).
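Since both downloads must land in specific folders, a quick layout check can catch mistakes before a run fails midway. This is a sketch, not part of the repo; the two paths below are the ones the steps above create, while file contents inside them are not inspected.

```python
from pathlib import Path

# Expected on-disk layout after the download steps above (a sketch; the exact
# files inside data/replica depend on which Replica subset you downloaded).
EXPECTED = [
    "data/replica",                      # Replica subset with semantic annotations
    "seg/facebookresearch_dinov2_main",  # unzipped pretrained segmentation network
]

def missing_paths(root: str, expected=EXPECTED) -> list[str]:
    """Return the expected paths that do not exist under `root`."""
    base = Path(root)
    return [p for p in expected if not (base / p).exists()]

if __name__ == "__main__":
    for p in missing_paths("."):
        print(f"missing: {p}")
```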
Run SNI-SLAM:
```bash
pixi run python -W ignore run.py configs/Replica/room1.yaml
```

The mesh for evaluation is saved as `$OUTPUT_FOLDER/mesh/final_mesh_eval_rec_culled.ply`.
To test the pipeline without running all frames, add `n_img` to your config:

```yaml
# in configs/Replica/room1.yaml
data:
  input_folder: data/replica/room_1/
  output: output/Replica/room1/test
  n_img: 200  # remove this line for the full run
```

You can also set a coarser meshing resolution to speed up mesh generation:

```yaml
meshing:
  resolution: 0.05  # default is 0.01; larger is faster but lower quality
```

If tracking and mapping already completed but mesh generation failed, you can reload the last checkpoint and regenerate the mesh without reprocessing all frames:

```bash
pixi run python -W ignore run.py configs/Replica/room1.yaml --mesh_only
```

This loads the latest checkpoint from `$OUTPUT_FOLDER/ckpts/`, reconstructs the keyframe data from disk, and runs meshing directly.
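To see why a coarser `resolution` helps so much, note that the number of voxels in the meshing grid grows cubically as the voxel size shrinks. The toy cost model below illustrates this (it is only an illustration; the scene extent of 8 m is an assumption, not a value from the repo):

```python
import math

# Rough cost model for grid-based meshing (an illustration, not the repo's
# implementation): cells per axis scale as extent/resolution, total cells cubically.

def grid_cells(scene_extent_m: float, resolution_m: float) -> int:
    """Total number of voxels in a cubic grid covering the scene."""
    per_axis = math.ceil(scene_extent_m / resolution_m)
    return per_axis ** 3

coarse = grid_cells(8.0, 0.05)  # 160^3 = 4,096,000 cells
fine = grid_cells(8.0, 0.01)    # 800^3 = 512,000,000 cells
print(f"{fine / coarse:.0f}x more cells at the default resolution")  # prints 125x ...
```

So going from 0.01 to 0.05 cuts the grid by a factor of 125, at the cost of mesh detail.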
To evaluate the average trajectory error, run the command below with the corresponding config file:

```bash
# An example for room1 of Replica
pixi run python src/tools/eval_ate.py configs/Replica/room1.yaml
```

For reconstruction evaluation, we follow existing evaluation code.
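Conceptually, the average trajectory error (ATE) is the RMSE over per-frame translation errors between the estimated and ground-truth camera positions. The sketch below is a simplified version: the actual `src/tools/eval_ate.py` additionally aligns the two trajectories before computing the error, which this sketch omits.

```python
import math

# Simplified ATE (a sketch): RMSE over per-frame translation errors.
# The real eval_ate.py first aligns the estimated trajectory to the ground
# truth, which is skipped here for clarity.

def ate_rmse(gt, est):
    """gt, est: equal-length lists of (x, y, z) positions, associated frame-by-frame."""
    assert len(gt) == len(est)
    sq_errs = [sum((g - e) ** 2 for g, e in zip(p, q)) for p, q in zip(gt, est)]
    return math.sqrt(sum(sq_errs) / len(sq_errs))

gt = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
est = [(0, 0, 0), (1, 0.1, 0), (2, -0.1, 0)]
print(f"ATE RMSE: {ate_rmse(gt, est):.4f} m")
```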
For visualizing the results, we recommend setting `mesh_freq: 40` in `configs/SNI-SLAM.yaml` and running SNI-SLAM from scratch.
After SNI-SLAM is trained, run the following command for visualization:

```bash
pixi run python visualizer.py configs/Replica/room1.yaml --top_view --save_rendering
```

The visualization result is saved at `output/Replica/room1/vis.mp4`. The green trajectory is the ground truth, and the red one is the trajectory estimated by SNI-SLAM.
- `--output $OUTPUT_FOLDER`: output folder (overrides the output folder in the config file)
- `--top_view`: set the camera to a top view; otherwise, the camera is set to the first frame of the sequence
- `--save_rendering`: save the rendering video to `vis.mp4` in the output folder
- `--no_gt_traj`: do not show the ground truth trajectory
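The flags above can be parsed with a standard `argparse` setup; the sketch below mirrors the options listed (the actual argument handling in `visualizer.py` may differ in details such as help text and defaults):

```python
import argparse

# A sketch of visualizer.py's command-line interface, built from the flags
# documented above; not the repo's actual parser.
def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(description="SNI-SLAM visualizer")
    p.add_argument("config", help="config file, e.g. configs/Replica/room1.yaml")
    p.add_argument("--output", help="overrides the output folder in the config file")
    p.add_argument("--top_view", action="store_true", help="set the camera to a top view")
    p.add_argument("--save_rendering", action="store_true",
                   help="save the rendering video to vis.mp4 in the output folder")
    p.add_argument("--no_gt_traj", action="store_true",
                   help="do not show the ground truth trajectory")
    return p

args = build_parser().parse_args(
    ["configs/Replica/room1.yaml", "--top_view", "--save_rendering"]
)
```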
If you find our code or paper useful, please consider citing:
```bibtex
@inproceedings{zhu2024sni,
  title={Sni-slam: Semantic neural implicit slam},
  author={Zhu, Siting and Wang, Guangming and Blum, Hermann and Liu, Jiuming and Song, Liang and Pollefeys, Marc and Wang, Hesheng},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={21167--21177},
  year={2024}
}

@ARTICLE{zhu2025sni,
  author={Zhu, Siting and Wang, Guangming and Blum, Hermann and Wang, Zhong and Zhang, Ganlin and Cremers, Daniel and Pollefeys, Marc and Wang, Hesheng},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title={SNI-SLAM++: Tightly-Coupled Semantic Neural Implicit SLAM},
  year={2026},
  volume={48},
  number={3},
  pages={3399--3416}
}
```