PRIS-CV/PolGS_plus

PolGS++: Physically-Based Polarimetric Gaussian Splatting for Fast Reflective Surface Reconstruction

Preprint: arXiv PDF (https://arxiv.org/abs/2603.10801)

Environment Setup

This project was tested on Ubuntu 22.04.3 with CUDA 11.8 and Python 3.7.13. The reconstruction process for a single object takes approximately 10 minutes on an RTX 4090 GPU.

1. Clone the Repository

```bash
git clone https://github.com/PRIS-CV/PolGS_plus.git
cd PolGS_plus
```

2. Create and Activate Conda Environment

```bash
conda create -n polgs++ python=3.7.13 -y
conda activate polgs++
```

3. Install Dependencies

Install PyTorch with CUDA support:

```bash
pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu116
```

Install other requirements:

```bash
pip install -r requirements.txt
```

Install custom submodules:

```bash
pip install submodules/diff-gaussian-rasterization
pip install submodules/simple-knn
pip install submodules/cubemapencoder
```

4. Install PyTorch3D

Please install PyTorch3D following the official installation guide.
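The official guide lists several installation routes; two common ones for a torch 1.12 / CUDA 11.6 / Python 3.7 setup are sketched below. The exact package versions are assumptions, so check the compatibility notes in the PyTorch3D installation guide before running either option.

```shell
# Option A: prebuilt conda package from the official pytorch3d channel
conda install -c fvcore -c iopath fvcore iopath
conda install -c pytorch3d pytorch3d

# Option B: build from source against the already-installed torch
pip install "git+https://github.com/facebookresearch/pytorch3d.git@stable"
```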

Data Preparation

We evaluate our method on the RMVP3D and SMVP3D subsets from NeRSP, as well as on the PANDORA and PISR datasets. Preprocessed data for some of the datasets can be downloaded from here.

Dataset Configuration

  • SMVP3D: 36 views (12×3 configuration) for training and testing
  • RMVP3D: 31 views (half of the original 61 views) for training and testing
  • PANDORA: All available views for training and testing
  • PISR: 20 views (half of the original 40 views) for training and testing
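The half-view splits above (31 of 61 for RMVP3D, 20 of 40 for PISR) are consistent with uniform every-other-view subsampling. A minimal sketch of that assumption follows; the actual split used by the authors may differ:

```python
# Hypothetical uniform subsampling: keep every second view.
def half_views(view_ids):
    """Return every other view, e.g. 61 views -> 31, 40 views -> 20."""
    return view_ids[::2]

rmvp3d_views = half_views(list(range(61)))  # 31 views
pisr_views = half_views(list(range(40)))    # 20 views
print(len(rmvp3d_views), len(pisr_views))   # 31 20
```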

Directory Structure

The data should be organized as follows:

```
data/
├── PANDORA/
│   ├── owl/
│   │   ├── train/                     # RGB images
│   │   ├── train_images_stokes/       # Stokes images
│   │   │   ├── 01_s0.hdr
│   │   │   ├── 01_s0p1.hdr
│   │   │   └── ...
│   │   ├── train_input_azimuth_maps/  # Mask files
│   │   └── cameras.npz                # Camera parameters
│   └── ...
├── RMVP3D/
│   ├── frog/
│   │   ├── train/                     # RGB images
│   │   ├── s0/                        # Stokes S0 images
│   │   ├── s1/                        # Stokes S1 images
│   │   ├── s2/                        # Stokes S2 images
│   │   ├── train_input_azimuth_maps/  # Mask files
│   │   └── cameras.npz                # Camera parameters
│   └── ...
├── SMVP3D/
│   ├── snail/
│   │   └── ...
│   └── ...
└── PISR/
    ├── StandingRabbit/
    ├── LyingRabbit/
    └── ...
```
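Given the layout above, the degree and angle of linear polarization follow from the first three Stokes components by their standard definitions. A hedged loading sketch, assuming the data loads into per-view `s0`/`s1`/`s2` arrays (the repo's own loader and HDR reader may differ):

```python
import numpy as np

def polarization_from_stokes(s0, s1, s2, eps=1e-8):
    """Degree (DoLP) and angle (AoLP) of linear polarization from the
    first three Stokes components, using the standard definitions:
    DoLP = sqrt(s1^2 + s2^2) / s0, AoLP = 0.5 * atan2(s2, s1)."""
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, eps)
    aolp = 0.5 * np.arctan2(s2, s1)  # radians, in [-pi/2, pi/2]
    return dolp, aolp

# Camera parameters would come from np.load(".../cameras.npz");
# here we use synthetic H x W Stokes images for illustration.
s0 = np.ones((4, 4))
s1 = np.full((4, 4), 0.3)
s2 = np.zeros((4, 4))
dolp, aolp = polarization_from_stokes(s0, s1, s2)
print(dolp[0, 0], aolp[0, 0])  # 0.3 0.0
```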

Using Your Own Data

To test on your own polarimetric data:

  1. Organize your data following the directory structure above
  2. Provide camera parameters in a cameras.npz file
  3. Create mask files for object segmentation
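For the mask-creation step, a minimal sketch assuming the object can be separated from the background by simple intensity thresholding (the threshold and array shapes are placeholders; real captures usually need a proper segmentation tool):

```python
import numpy as np

def binary_mask(image, threshold=0.05):
    """Binary object mask from a float image in [0, 1]: pixels brighter
    than `threshold` in any channel become foreground (255)."""
    if image.ndim == 3:          # H x W x C -> per-pixel channel max
        image = image.max(axis=-1)
    return (image > threshold).astype(np.uint8) * 255

# Synthetic example: a bright square on a dark background.
img = np.zeros((8, 8, 3))
img[2:6, 2:6] = 0.8
mask = binary_mask(img)
print(mask[3, 3], mask[0, 0])  # 255 0
```

The resulting array can be saved per view (e.g. with PIL or imageio) into the mask directory expected by the dataset layout.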

Training

Prerequisites

Before training, ensure you have:

  1. Prepared your data following the format described in the Data Preparation section
  2. Updated the data paths in train.bash to point to your dataset location

Running Training

To train a scene, execute:

```bash
bash train.bash
```

Acknowledgements

We gratefully acknowledge the following open-source projects that contributed to this work:

  • Gaussian Surfels - For foundational Gaussian surfels implementation
  • 3DGS-DR - For 3D Gaussian Splatting with deferred rendering

Bibtex

```bibtex
@inproceedings{han2025polgs,
  title={PolGS: Polarimetric Gaussian Splatting for Fast Reflective Surface Reconstruction},
  author={Han, Yufei and Tie, Bowen and Guo, Heng and Lyu, Youwei and Li, Si and Shi, Boxin and Jia, Yunpeng and Ma, Zhanyu},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2025},
}

@misc{han2026polgs++,
  title={PolGS++: Physically-Guided Polarimetric Gaussian Splatting for Fast Reflective Surface Reconstruction},
  author={Han, Yufei and Zhou, Chu and Lyu, Youwei and Chen, Qi and Li, Si and Shi, Boxin and Jia, Yunpeng and Guo, Heng and Ma, Zhanyu},
  year={2026},
  eprint={2603.10801},
  archivePrefix={arXiv},
  url={https://arxiv.org/abs/2603.10801},
}
```
