This repository contains the synthetic data generation code, built on Mitsuba3, for the project *Generative Perception of Shape and Material from Differential Motion* (see the Project Page).
(repo work in progress)
This repository provides the following core functionality:
- **Scene & Material Rendering**
  - Renders a textured 3D object along with its ground-truth geometry (normals or depth), albedo, and spatially-varying BRDF properties (see the first sketch after this list).
  - See `render_setup.py` and `brdf_integrator.py`.
- **Rotating-Object Video Generation**
  - Produces a video of the 3D object rotating around a specified axis, including all ground-truth maps (see the second sketch after this list).
  - Two modes:
    - Randomized textures (`.jpg`/`.png`) + `.obj` mesh → `datagen_motion.py`
    - Artist-designed textures bound to a specific asset in `.xml` → `datagen_xml.py`
- **Blender-to-Mitsuba Export**
  - Converts Blender-loaded meshes and textures into Mitsuba3-compatible `.xml` scenes.
  - See `blender_obj_to_xml.py`.
- **Post-Processing**
  - Tone-maps `.exr` renders and normalizes surface normal maps (see the third sketch after this list).
  - See `video_postprocess.py`.
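
To give a feel for the first item, here is a minimal sketch of rendering ground-truth channels with Mitsuba3's built-in `aov` integrator. This is not code from this repo: `scenes/example.xml` is a placeholder path, and the channel setup actually used here lives in `render_setup.py` and `brdf_integrator.py`.

```python
import mitsuba as mi

mi.set_variant('llvm_ad_rgb')  # or 'cuda_ad_rgb' on a CUDA-capable GPU

scene = mi.load_file('scenes/example.xml')  # placeholder scene file

# Wrap a path tracer in the `aov` integrator so the render carries
# ground-truth channels (shading normals, depth, albedo) alongside RGB.
integrator = mi.load_dict({
    'type': 'aov',
    'aovs': 'nn:sh_normal,dd:depth,alb:albedo',
    'img': {'type': 'path'},
})

img = mi.render(scene, integrator=integrator, spp=64)
mi.util.write_bitmap('render_0000.exr', img)  # EXR preserves all channels
```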
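
The second item can be sketched with the vertex-transform pattern from the Mitsuba3 tutorials: read the rest-pose vertices once, then rotate and re-render per frame. The parameter key `object.vertex_positions` is hypothetical (it depends on the shape id in your scene; inspect `print(params)` to find yours), and `datagen_motion.py` is the real implementation.

```python
import os
import drjit as dr
import mitsuba as mi

mi.set_variant('llvm_ad_rgb')

scene = mi.load_file('scenes/example.xml')  # placeholder scene file
params = mi.traverse(scene)

key = 'object.vertex_positions'             # hypothetical parameter key
base = dr.unravel(mi.Point3f, params[key])  # rest-pose vertices

os.makedirs('frames', exist_ok=True)
n_frames = 60
for i in range(n_frames):
    # Rotate the rest-pose mesh about the y axis and write the new
    # vertex positions back into the scene before rendering the frame.
    trafo = mi.Transform4f().rotate([0, 1, 0], 360.0 * i / n_frames)
    params[key] = dr.ravel(trafo @ base)
    params.update()
    mi.util.write_bitmap(f'frames/frame_{i:04d}.exr',
                         mi.render(scene, spp=32))
```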
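
And the tone-mapping half of the last item, sketched with Mitsuba3's `Bitmap` conversion under the assumption of a plain RGB `.exr` input; `video_postprocess.py` is the authoritative version and additionally normalizes the normal maps.

```python
import mitsuba as mi

mi.set_variant('scalar_rgb')

# Convert a linear HDR render (assumed RGB) to an 8-bit sRGB PNG frame.
bmp = mi.Bitmap('render_0000.exr')
bmp = bmp.convert(mi.Bitmap.PixelFormat.RGB,
                  mi.Struct.Type.UInt8,
                  srgb_gamma=True)
bmp.write('render_0000.png')
```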
To get started:

- **Install Mitsuba3**
  - Follow the instructions on the Mitsuba3 GitHub to build and install.
- **Select rendering variant**
  - In your Python script, choose between CPU or GPU acceleration:

    ```python
    import mitsuba as mi

    # CPU-based (LLVM IR)
    mi.set_variant('llvm_ad_rgb')

    # GPU-based (CUDA)
    mi.set_variant('cuda_ad_rgb')
    ```
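If you are unsure which variants your build includes, `mi.variants()` returns the list of compiled-in variants; `cuda_ad_rgb` is only available in builds with CUDA support.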
(TODO)
If you find this repo useful, please consider citing:
```bibtex
@article{han2025generative,
  title={Generative Perception of Shape and Material from Differential Motion},
  author={Han, Xinran Nicole and Nishino, Ko and Zickler, Todd},
  journal={arXiv preprint arXiv:2506.02473},
  year={2025}
}
```

We thank the developers of Mitsuba3 for their valuable contributions to the open source community. We also thank Kohei Yamashita for discussions on the data generation pipeline.