# Relightable Large Reconstruction from Multi-View Images

Project Page · Model (Hugging Face) · Code
ReLi3D is an inference-only release for relightable 3D asset reconstruction.
Given a set of object images and camera poses (transforms.json + RGBA frames), ReLi3D predicts a UV-unwrapped textured mesh with material attributes and estimated illumination.
Training and experiment pipelines are intentionally excluded from this repository.
Repository layout:

- `demos/reli3d/infer_from_transforms.py`: Main inference entrypoint.
- `configs/reli3d/inference.yaml`: Runtime system config.
- `scripts/download_model_from_hf.py`: Explicit model artifact downloader.
- `src/`: Runtime model components used for inference.
- `native/`: Native extensions (`uv_unwrapper`, `texture_baker`).
- `artifacts/model/`: Local model artifacts (`config.yaml`, `reli3d_final.ckpt`).
```bash
git clone https://github.com/Stability-AI/ReLi3D.git
cd ReLi3D
python3.10 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
pip install ./native/uv_unwrapper ./native/texture_baker
```

The default model source is `StabilityLabs/ReLi3D` on Hugging Face.
If the model repo is private or gated, authenticate first:

```bash
huggingface-cli login
git config --global credential.helper store
```

Download artifacts explicitly:
```bash
python scripts/download_model_from_hf.py \
    --repo-id StabilityLabs/ReLi3D \
    --output-dir artifacts/model
```

Selective download:
```bash
# Only the config
python scripts/download_model_from_hf.py --skip-checkpoint

# Only the checkpoint
python scripts/download_model_from_hf.py --skip-config
```

`demos/reli3d/infer_from_transforms.py` also auto-downloads missing default artifacts during inference.
Checkpoint resolution order:

1. `--checkpoint` command-line flag
2. `RELI3D_CHECKPOINT` environment variable
3. `artifacts/model/reli3d_final.ckpt`
Default config path: `artifacts/model/config.yaml` (falls back to `artifacts/model/raw.yaml` if present).
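The resolution order above can be sketched as a small helper. This is an illustrative sketch, not the actual code in `demos/reli3d/infer_from_transforms.py`; the function name is hypothetical, but the precedence (flag, then environment variable, then local default) follows the list above:

```python
import os
from pathlib import Path
from typing import Optional

# Local default artifact, per the resolution order above.
DEFAULT_CHECKPOINT = Path("artifacts/model/reli3d_final.ckpt")


def resolve_checkpoint(cli_path: Optional[str]) -> Path:
    """Resolve the checkpoint path: --checkpoint flag first,
    then the RELI3D_CHECKPOINT environment variable,
    then the bundled default under artifacts/model/."""
    if cli_path:
        return Path(cli_path)
    env_path = os.environ.get("RELI3D_CHECKPOINT")
    if env_path:
        return Path(env_path)
    return DEFAULT_CHECKPOINT
```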
Each object under `--input-root` must contain:

- `transforms.json`
- `rgba/<view>.png`
Example:

```
demo_files/objects/
  Camera_01/
    transforms.json
    rgba/
      0000.png
      0010.png
      0021.png
      0031.png
```
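A quick sanity check for this layout can be written in a few lines. This is a hypothetical helper, not part of the repository; it only verifies the two requirements stated above (a `transforms.json` file and at least one `rgba/*.png` frame):

```python
from pathlib import Path


def validate_object_dir(obj_dir: Path) -> list:
    """Return a list of problems for one object directory; empty means OK."""
    problems = []
    if not (obj_dir / "transforms.json").is_file():
        problems.append("missing transforms.json")
    rgba = obj_dir / "rgba"
    if not rgba.is_dir() or not any(rgba.glob("*.png")):
        problems.append("missing rgba/*.png frames")
    return problems
```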
Expected `transforms.json` frame keys:

- `file_path`
- `transform_matrix` (or `camera_transform`)
- `camera_fov` (optional)
- `camera_principal_point`
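A minimal frame entry might look like the following. The values here are illustrative only; the field names follow the list above, and the exact matrix convention and any additional metadata depend on your capture pipeline:

```json
{
  "frames": [
    {
      "file_path": "rgba/0000.png",
      "camera_fov": 0.69,
      "transform_matrix": [
        [1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 2.5],
        [0.0, 0.0, 0.0, 1.0]
      ]
    }
  ]
}
```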
```bash
python demos/reli3d/infer_from_transforms.py \
    --input-root demo_files/objects \
    --objects Camera_01 \
    --output-root outputs \
    --num-views 4 \
    --texture-size 256 \
    --overwrite
```

If you want strict parity with your exact final run setup, pass explicit artifact paths:
```bash
python demos/reli3d/infer_from_transforms.py \
    --input-root demo_files/objects \
    --objects Camera_01 \
    --config /path/to/raw.yaml \
    --checkpoint /path/to/epoch=0-step=80000.ckpt \
    --output-root outputs \
    --overwrite
```

Per-object output directory:
- `outputs/<object>/mesh.glb`
- `outputs/<object>/illumination.hdr` (if predicted)
- `outputs/<object>/run_info.json`
`run_info.json` stores repo-relative or filename-only paths to avoid leaking internal absolute filesystem paths.
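That sanitization behavior could be implemented roughly as follows. This is a hypothetical sketch, not the repository's actual code: paths inside the repo are stored relative to the repo root, anything outside falls back to the bare filename:

```python
from pathlib import Path


def sanitize_path(path: Path, repo_root: Path) -> str:
    """Record a repo-relative path when possible, otherwise only the
    filename, so run_info.json never contains absolute internal paths."""
    try:
        return path.resolve().relative_to(repo_root.resolve()).as_posix()
    except ValueError:
        # Path lies outside the repo root; keep just the filename.
        return path.name
```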
Use of this code and model is subject to the Stability AI Community License terms.
For individuals or organizations generating annual revenue of USD 1,000,000 (or local currency equivalent) or more, commercial usage requires an enterprise license from Stability AI.
- License details: https://stability.ai/license
- Enterprise request: https://stability.ai/enterprise
```bibtex
@inproceedings{dihlmann2026reli3d,
  author    = {Dihlmann, Jan-Niklas and Boss, Mark and Donne, Simon and Engelhardt, Andreas and Lensch, Hendrik P. A. and Jampani, Varun},
  title     = {ReLi3D: Relightable Multi-view 3D Reconstruction with Disentangled Illumination},
  booktitle = {International Conference on Learning Representations (ICLR)},
  year      = {2026}
}
```
