Official PyTorch Implementation of "Intrinsic-Guided Photorealistic Style Transfer for Radiance Fields", ACM Multimedia APP3DV'25

Intrinsic-Guided Photorealistic Style Transfer for Radiance Fields


Project Page | Accepted: ACM Multimedia APP3DV'25 Workshop

The framework of IPRF

Contributions

  • We propose IPRF, a novel photorealistic 3D style transfer framework that leverages Intrinsic Image Decomposition (IID) to move beyond simple color-based transformations while preserving structural consistency.

  • IPRF preserves the physical realism of content by independently optimizing albedo and shading losses, enabling faithful separation of material and illumination properties.

  • We introduce TSI (Tuning-assisted Style Interpolation), a real-time method that enables smooth style transitions and efficient hyperparameter exploration without requiring additional training.

  • Extensive benchmarks demonstrate that IPRF outperforms prior methods in balancing photorealism and style fidelity.
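As a rough illustration of the second contribution, the sketch below separates a content representation into albedo (material) and shading (illumination) components and optimizes each with an independent, weighted style loss. The decomposition inputs, the L2 loss, and the weight names are toy stand-ins for illustration only, not the paper's exact formulation.

```python
# Hypothetical sketch of an intrinsic-guided objective: albedo and
# shading components (obtained via IID, e.g. PIE-Net in the paper)
# each get their own style loss with an independent weight.
# All names and values here are illustrative assumptions.

def l2(a, b):
    """Mean squared difference between two equal-length feature lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def intrinsic_style_loss(albedo, shading, style_albedo, style_shading,
                         w_albedo=1.0, w_shading=1.0):
    """Weighted sum of independent albedo and shading style losses."""
    return (w_albedo * l2(albedo, style_albedo)
            + w_shading * l2(shading, style_shading))

# Toy features: material (albedo) and illumination (shading) are
# matched to the style's components separately.
print(intrinsic_style_loss([0.2, 0.4], [0.6, 0.8], [0.3, 0.4], [0.6, 0.7]))
```

Keeping the two terms separate is what lets material and illumination be weighted (and later interpolated) independently.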

Getting Started

Environment Requirement

Clone the repo:

git clone https://github.com/OSHMOS/IPRF.git
cd IPRF

Install the iprf requirements using conda and pip:

conda create -n iprf python=3.9 -y && conda activate iprf
pip install -r requirements.txt
pip install -e . --verbose

Input Data Preparation

IPRF supports datasets like NeRF-LLFF and ARF-Style data. To quickly test the method, download a sample dataset:

# Place the downloaded data in IPRF/data/.
mkdir data && cd data
gdown 1VNaB0Wy1almoXEq44ipw83SERDHMozQB
unzip data.zip && cd ..

PIE-Net (Intrinsic Image Decomposition Extractor)

Download the pre-trained PIE-Net model and place it in both IPRF/opt/iid_extractor/ckpt/<pre-trained model> and IPRF/controllable/iid_extractor/ckpt/<pre-trained model>.
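Since the same checkpoint must live in two places, a small helper can create both target directories before you copy the weights in. This is a convenience sketch, not part of the repo; the two paths are taken from the instructions above, and the repo-root argument is an assumption.

```python
# Create the two checkpoint directories that must both hold the
# pre-trained PIE-Net weights. The repo root is passed in so the
# helper works from any working directory. (Hypothetical helper,
# not shipped with IPRF.)
from pathlib import Path

def make_ckpt_dirs(repo_root):
    """Return the two ckpt directories, creating them if missing."""
    ckpt_dirs = [
        Path(repo_root) / "opt" / "iid_extractor" / "ckpt",
        Path(repo_root) / "controllable" / "iid_extractor" / "ckpt",
    ]
    for d in ckpt_dirs:
        d.mkdir(parents=True, exist_ok=True)  # idempotent: safe to re-run
    return ckpt_dirs

for d in make_ckpt_dirs("."):
    print(d)
```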

Test

For accurate reproduction, we recommend a GeForce RTX 4090 or better GPU.

Two GPUs are recommended: one for Intrinsic Image Decomposition (24 GB VRAM required) and one for Style Transfer (8 GB VRAM is sufficient). If using a single GPU, 32 GB VRAM is required.

Style Transfer

Stylize the flower scene:

# . ./try_llff.sh [scene_name] [style_number]
. ./try_llff.sh flower 14
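To stylize several scene/style pairs in one go, a small driver script can compose and run the command above for each pair. This is a hypothetical convenience wrapper: the command shape mirrors `try_llff.sh [scene_name] [style_number]` from the README, but note the README sources the script (`. ./try_llff.sh`), whereas this sketch invokes it via `bash`, which is an assumption that holds only if the script does not need to modify your shell environment.

```python
# Hedged sketch: batch-run try_llff.sh over several (scene, style)
# pairs. The pair list is illustrative; the script name and argument
# order come from the README.
import subprocess

def build_cmd(scene, style):
    """Compose the shell invocation for one scene/style pair."""
    return ["bash", "try_llff.sh", scene, str(style)]

def run_all(pairs, dry_run=True):
    """Build (and optionally execute) one command per pair."""
    cmds = [build_cmd(scene, style) for scene, style in pairs]
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)  # stylize one scene
    return cmds

print(run_all([("flower", 14), ("fern", 3)]))
```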

3D Style Interpolation

To interpolate between albedo and shading, launch the Gradio interface:

cd IPRF/controllable/
python 3D.py
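The interpolation idea behind TSI can be illustrated with a linear blend between two already-optimized parameter sets (for example, an albedo-stylized and a shading-stylized radiance field), which needs no retraining. The function name and the simple linear rule below are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical TSI-style sketch: blend two trained parameter sets at
# inference time. alpha=0 returns the first set, alpha=1 the second.
# Plain Python lists stand in for radiance-field parameters.

def interpolate_styles(params_a, params_b, alpha):
    """Linear blend of two flat parameter lists."""
    if len(params_a) != len(params_b):
        raise ValueError("parameter sets must have the same shape")
    return [(1.0 - alpha) * a + alpha * b for a, b in zip(params_a, params_b)]

albedo_style = [0.2, 0.5, 0.9]   # toy stand-in for albedo-stylized weights
shading_style = [0.8, 0.1, 0.3]  # toy stand-in for shading-stylized weights
print(interpolate_styles(albedo_style, shading_style, 0.5))  # midpoint blend
```

Because the blend is a cheap per-parameter operation, sweeping alpha gives smooth, real-time style transitions without any additional optimization.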

Citation

Coming Soon!

Acknowledgement

We would like to thank the authors of Plenoxels, ARF-svox2, and PIE-Net for open-sourcing their implementations.
