
Gaussian in the Wild: 3D Gaussian Splatting for Unconstrained Image Collections

Dongbin Zhang*, Chuming Wang*, Weitao Wang, Peihao Li, Minghan Qin, Haoqian Wang†
(* indicates equal contribution, † means corresponding author)

Webpage | Full Paper | Video

This repository contains the official authors' implementation associated with the paper "Gaussian in the Wild: 3D Gaussian Splatting for Unconstrained Image Collections", which can be found here.

Teaser image

Pipeline
Pipeline of GS-W

Cloning the Repository

The repository contains submodules, so please check it out with

# SSH
git clone git@github.com:EastbeanZhang/Gaussian-Wild.git --recursive

or

# HTTPS
git clone https://github.com/EastbeanZhang/Gaussian-Wild.git --recursive

The components have been tested on Ubuntu Linux 18.04. Instructions for setting up and running each of them are in the sections below.

Dataset preparation

Download the scenes (we use Brandenburg Gate, Trevi Fountain, and Sacre Coeur in our experiments) from the Image Matching Challenge PhotoTourism (IMC-PT) 2020 dataset. Then download the train/test split from NeRF-W and put it under each scene's folder, at the same level as the "dense" folder (see the tree structure of each dataset below).

The synthetic lego dataset can be downloaded from Nerf_Data.
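
As an illustration, arranging one scene might look like the following (a sketch: the archive name, the datasets/ root, and the extraction layout are assumptions chosen here, not part of the repository):

# Hypothetical layout step: extract an IMC-PT scene, then place the
# NeRF-W split .tsv at the same level as the "dense" folder.
mkdir -p datasets
tar -xzf brandenburg_gate.tar.gz -C datasets/
mv brandenburg.tsv datasets/brandenburg_gate/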

The tree structure of each dataset


brandenburg_gate/
├── dense/
│   ├── images/
│   ├── sparse/
│   └── stereo/
└── brandenburg.tsv


trevi_fountain/
├── dense/
│   ├── images/
│   ├── sparse/
│   └── stereo/
└── trevi.tsv


sacre_coeur/
├── dense/
│   ├── images/
│   ├── sparse/
│   └── stereo/
└── sacre.tsv


lego/
├── train/
├── test/
├── val/
├── transforms_train.json
├── transforms_test.json
└── transforms_val.json
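
As an optional sanity check before training (a sketch: it assumes the scene folders sit in the current directory and follow the layout above):

# Verify each PhotoTourism scene has the expected images and .tsv split file
for scene in brandenburg_gate trevi_fountain sacre_coeur; do
    if [ -d "$scene/dense/images" ] && ls "$scene"/*.tsv >/dev/null 2>&1; then
        echo "$scene: OK"
    else
        echo "$scene: missing dense/images or .tsv split file"
    fi
done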

Optimizer

The optimizer uses PyTorch and CUDA extensions in a Python environment to produce trained models.

Hardware Requirements

  • CUDA-ready GPU with Compute Capability 7.0+
  • 24 GB VRAM (to train to paper evaluation quality)

Software Requirements

  • Conda (recommended for easy setup)
  • C++ Compiler for PyTorch extensions (e.g., GCC on Linux or Visual Studio on Windows)
  • CUDA SDK 11 for PyTorch extensions (we used 11.8)
  • C++ Compiler and CUDA SDK must be compatible

Setup

Environment Setup

Our default, provided install method is based on Conda package and environment management:

conda env create --file environment.yml
conda activate GS-W
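
To quickly confirm that the environment sees the GPU, a generic PyTorch check (not part of the repository) can be run:

# Should print the PyTorch version and "True" if CUDA is available
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"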

Training

Taking the Sacre Coeur scene as an example (more specific commands are shown in run_train.sh):

# sacre coeur
CUDA_VISIBLE_DEVICES=0 python ./train.py --source_path /path/to/sacre_coeur/dense/ --scene_name sacre --model_path outputs/sacre/full --eval --resolution 2 --iterations 70000
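
To train all three PhotoTourism scenes sequentially, a loop along these lines should work (a sketch following the pattern above and run_train.sh; the dataset paths and --scene_name values are assumptions):

# Sketch: train each scene with the same settings as the example above
for scene in brandenburg_gate trevi_fountain sacre_coeur; do
    CUDA_VISIBLE_DEVICES=0 python ./train.py \
        --source_path /path/to/${scene}/dense/ \
        --scene_name ${scene} \
        --model_path outputs/${scene}/full \
        --eval --resolution 2 --iterations 70000
done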

Render

Render the training and testing results

(This is automatically done after training by default)

# sacre coeur
CUDA_VISIBLE_DEVICES=0 python ./render.py  --model_path outputs/sacre/full

Rendering a multi-view video demo

# sacre coeur
CUDA_VISIBLE_DEVICES=0 python ./render.py  --model_path outputs/sacre/full --skip_train --skip_test --render_multiview_vedio

Rendering an appearance tuning demo

# sacre coeur
CUDA_VISIBLE_DEVICES=0 python ./render.py  --model_path outputs/sacre/full --skip_train --skip_test --render_interpolate

Evaluation

(This is automatically done after training by default)

Following NeRF-W, Ha-NeRF, and CR-NeRF, we evaluate the metrics on the right half of each image to compare with them.

# sacre coeur
CUDA_VISIBLE_DEVICES=0 python ./metrics_half.py --model_path outputs/sacre/full

If desired, it can also be evaluated on the whole image.

# sacre coeur
CUDA_VISIBLE_DEVICES=0 python ./metrics.py --model_path outputs/sacre/full

BibTeX

@article{zhang2024gaussian,
  title={Gaussian in the Wild: 3D Gaussian Splatting for Unconstrained Image Collections},
  author={Zhang, Dongbin and Wang, Chuming and Wang, Weitao and Li, Peihao and Qin, Minghan and Wang, Haoqian},
  journal={arXiv preprint arXiv:2403.15704},
  year={2024}
}

Acknowledgments

Our code is based on the awesome PyTorch implementation of 3D Gaussian Splatting (3DGS). We appreciate all the contributors.
