
[CVPR'25] SplineGS: Robust Motion-Adaptive Spline for Real-Time Dynamic 3D Gaussians from Monocular Video


Jongmin Park1*, Minh-Quan Viet Bui1*, Juan Luis Gonzalez Bello1, Jaeho Moon1, Jihyong Oh2†, Munchurl Kim1†
1KAIST, South Korea, 2Chung-Ang University, South Korea
*Co-first authors (equal contribution), †Co-corresponding authors


📣 News

Updates

  • May 26, 2025: Code released.
  • February 26, 2025: SplineGS accepted to CVPR 2025 🎉.
  • December 13, 2024: Paper uploaded to arXiv. Check out the manuscript at https://arxiv.org/abs/2412.09982.

To-Dos

  • Add DAVIS dataset configurations.
  • Add custom dataset support.
  • Add iPhone dataset configurations.

⚙️ Environment Setup

Clone the repo and install dependencies:

git clone https://github.com/KAIST-VICLab/SplineGS.git --recursive
cd SplineGS

# install splinegs environment
conda create -n splinegs python=3.7 
conda activate splinegs
export CUDA_HOME=$CONDA_PREFIX
export LD_LIBRARY_PATH=$CONDA_PREFIX/lib

conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.7 -c pytorch -c nvidia
conda install nvidia/label/cuda-11.7.0::cuda
conda install nvidia/label/cuda-11.7.0::cuda-nvcc
conda install nvidia/label/cuda-11.7.0::cuda-runtime
conda install nvidia/label/cuda-11.7.0::cuda-cudart


pip install -e submodules/simple-knn
pip install -e submodules/co-tracker
pip install -r requirements.txt

# install depth environment
conda deactivate
conda create -n unidepth_splinegs python=3.10
conda activate unidepth_splinegs

pip install -r requirements_unidepth.txt
conda install -c conda-forge ld_impl_linux-64
export CUDA_HOME=$CONDA_PREFIX
export LD_LIBRARY_PATH=$CONDA_PREFIX/lib
conda install nvidia/label/cuda-12.1.0::cuda
conda install nvidia/label/cuda-12.1.0::cuda-nvcc
conda install nvidia/label/cuda-12.1.0::cuda-runtime
conda install nvidia/label/cuda-12.1.0::cuda-cudart
conda install nvidia/label/cuda-12.1.0::libcusparse
conda install nvidia/label/cuda-12.1.0::libcublas
(cd submodules/UniDepth/unidepth/ops/knn && bash compile.sh)
(cd submodules/UniDepth/unidepth/ops/extract_patches && bash compile.sh)

pip install -e submodules/UniDepth
mkdir -p submodules/mega-sam/Depth-Anything/checkpoints
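The editable installs above assume the submodule sources were fetched by the --recursive clone. A small sanity check (a sketch, not an official repo script) that flags a missing or empty submodule checkout:

```shell
# Empty submodule directories usually mean the repo was cloned without
# --recursive; recover with: git submodule update --init --recursive
for s in simple-knn co-tracker UniDepth mega-sam; do
  [ -n "$(ls -A "submodules/${s}" 2>/dev/null)" ] || echo "missing or empty: submodules/${s}"
done
```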

📁 Data Preparations

Nvidia Dataset

  1. We follow the evaluation setup from RoDynRF. Download the training images here and arrange them as follows:
SplineGS/data/nvidia_rodynrf
    ├── Balloon1
    │   ├── images_2
    │   ├── instance_masks
    │   ├── motion_masks
    │   └── gt
    ├── ...
    └── Umbrella
  2. Download the Depth-Anything checkpoint and place it at submodules/mega-sam/Depth-Anything/checkpoints. Then generate depth estimates and tracking results for all scenes:
conda activate unidepth_splinegs
bash gen_depth.sh

conda deactivate
conda activate splinegs
bash gen_tracks.sh
  3. To obtain motion masks, please refer to Shape of Motion. For the Nvidia dataset, we provide precomputed masks in the motion_masks folder.
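Before training, it can help to verify that a scene folder matches the layout above. A quick check (the scene path below is just an example):

```shell
# List any expected sub-folder missing from a scene directory.
SCENE_DIR=data/nvidia_rodynrf/Balloon1
for d in images_2 instance_masks motion_masks gt; do
  [ -d "${SCENE_DIR}/${d}" ] || echo "missing: ${SCENE_DIR}/${d}"
done
```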

YOUR OWN Dataset

TBD

🚀 Get Started

Nvidia Dataset

Training

# make sure the splinegs environment is active
conda activate splinegs

python train.py -s data/nvidia_rodynrf/${SCENE}/ --expname "${EXP_NAME}" --configs arguments/nvidia_rodynrf/${SCENE}.py
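${SCENE} and ${EXP_NAME} are shell placeholders. A concrete example with arbitrary names (any scene folder under data/nvidia_rodynrf with a matching config under arguments/nvidia_rodynrf works); the leading echo prints the fully expanded command so you can sanity-check it — drop the echo to actually train:

```shell
SCENE=Balloon1          # any scene folder under data/nvidia_rodynrf
EXP_NAME=balloon1_exp   # results are written under output/${EXP_NAME}
# Remove the leading `echo` to launch training.
echo python train.py -s "data/nvidia_rodynrf/${SCENE}/" --expname "${EXP_NAME}" --configs "arguments/nvidia_rodynrf/${SCENE}.py"
```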

Metrics Evaluation

python eval_nvidia.py -s data/nvidia_rodynrf/${SCENE}/ --expname "${EXP_NAME}" --configs arguments/nvidia_rodynrf/${SCENE}.py --checkpoint output/${EXP_NAME}/point_cloud/fine_best
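The --checkpoint flag points at the best fine-stage checkpoint that training writes under output/${EXP_NAME}. An example with the placeholders filled in (names are arbitrary; the checkpoint-existence check is just a convenience, and the leading echo should be dropped to actually evaluate):

```shell
SCENE=Balloon1
EXP_NAME=balloon1_exp
CKPT="output/${EXP_NAME}/point_cloud/fine_best"
# Warn early if training has not produced the checkpoint yet.
[ -e "$CKPT" ] || echo "checkpoint not found: $CKPT (run training first)"
echo python eval_nvidia.py -s "data/nvidia_rodynrf/${SCENE}/" --expname "${EXP_NAME}" --configs "arguments/nvidia_rodynrf/${SCENE}.py" --checkpoint "$CKPT"
```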

YOUR OWN Dataset

Training

TBD

Evaluation

TBD

Acknowledgments

  • This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korean Government [Ministry of Science and ICT (Information and Communications Technology)] (Project Number: RS-2022-00144444, Project Title: Deep Learning Based Visual Representational Learning and Rendering of Static and Dynamic Scenes, 100%).

⭐ Citing SplineGS

If you find our repository useful, please consider giving it a star ⭐ and citing our research papers in your work:

@InProceedings{Park_2025_CVPR,
    author    = {Park, Jongmin and Bui, Minh-Quan Viet and Bello, Juan Luis Gonzalez and Moon, Jaeho and Oh, Jihyong and Kim, Munchurl},
    title     = {SplineGS: Robust Motion-Adaptive Spline for Real-Time Dynamic 3D Gaussians from Monocular Video},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {26866-26875}
}
