xrhan/diffmotion_datagen


This repository contains the synthetic data generation code, built on Mitsuba3, for the project Generative Perception of Shape and Material from Differential Motion. See the Project Page for details.

(This repository is a work in progress.)

🍡 Introduction

This repository provides the following core functionality:

  1. Scene & Material Rendering

    • Renders a textured 3D object along with its ground-truth geometry (normals or depth), albedo, and spatially-varying BRDF properties.
    • See render_setup.py and brdf_integrator.py; a minimal rendering sketch appears after this list.
  2. Rotating-Object Video Generation

    • Produces a video of the 3D object rotating around a specified axis, including all ground-truth maps.
    • Two generation modes are supported; a minimal rotation sketch appears under Data Generation Examples below.
  3. Blender-to-Mitsuba Export

    • Converts Blender-loaded meshes and textures into Mitsuba3-compatible .xml scenes.
    • See blender_obj_to_xml.py.
  4. Post-Processing
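
To make the rendering pieces concrete, here is a minimal sketch (not the repo's actual pipeline) of loading a scene, such as one exported by blender_obj_to_xml.py, and rendering ground-truth albedo, shading-normal, and depth maps with Mitsuba3's built-in aov integrator. The scene path 'scene.xml' is a placeholder:

    import mitsuba as mi

    mi.set_variant('llvm_ad_rgb')  # or 'cuda_ad_rgb' on a CUDA-enabled build

    # Load a scene, e.g. one exported by blender_obj_to_xml.py
    # ('scene.xml' is a placeholder path).
    scene = mi.load_file('scene.xml')

    # Wrap a path tracer in the 'aov' integrator so the render carries
    # per-pixel albedo, shading normals, and depth alongside the RGB image.
    integrator = mi.load_dict({
        'type': 'aov',
        'aovs': 'albedo:albedo,nn:sh_normal,dd:depth',
        'base': {'type': 'path'},
    })

    # The result is a multi-channel tensor; EXR preserves all channels.
    image = mi.render(scene, integrator=integrator, spp=64)
    mi.util.write_bitmap('render_with_aovs.exr', image)

The repository's own integrators (see brdf_integrator.py) go beyond this by also extracting spatially-varying BRDF properties.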

🍧 Usage

  1. Install Mitsuba3
    Follow the instructions in the Mitsuba3 GitHub repository to build and install it; prebuilt wheels can also be installed with pip install mitsuba.

  2. Select a rendering variant
    In your Python script, choose either the CPU or the GPU backend:

    import mitsuba as mi
    
    # CPU-based (LLVM IR)
    mi.set_variant('llvm_ad_rgb')
    
    # GPU-based (CUDA)
    mi.set_variant('cuda_ad_rgb')
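
    The cuda_ad_rgb variant requires an NVIDIA GPU and a CUDA-enabled Mitsuba build. As a small convenience sketch (not part of this repo), you can query which variants your installation provides and fall back to the CPU backend:

    import mitsuba as mi

    # Prefer the GPU variant when the installed build provides it;
    # otherwise fall back to the LLVM (CPU) variant.
    variant = 'cuda_ad_rgb' if 'cuda_ad_rgb' in mi.variants() else 'llvm_ad_rgb'
    mi.set_variant(variant)
    print(f'Using Mitsuba variant: {mi.variant()}')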

🧁 Data Generation Examples

(TODO)
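
As a placeholder, here is a minimal sketch of the rotating-object idea from the Introduction (not the repo's actual pipeline): the scene is rebuilt per frame with the object rotated about the y-axis, and each frame is written to disk. The mesh file 'bunny.ply' and all scene parameters below are placeholders:

    import numpy as np
    import mitsuba as mi

    mi.set_variant('llvm_ad_rgb')

    for i, angle in enumerate(np.linspace(0.0, 360.0, num=60, endpoint=False)):
        # Rebuild a small scene with the object rotated by `angle` degrees
        # about the y-axis ('bunny.ply' is a placeholder mesh).
        scene = mi.load_dict({
            'type': 'scene',
            'integrator': {'type': 'path'},
            'sensor': {
                'type': 'perspective',
                'to_world': mi.ScalarTransform4f().look_at(
                    origin=[0, 0, 4], target=[0, 0, 0], up=[0, 1, 0]),
            },
            'light': {'type': 'constant'},
            'object': {
                'type': 'ply',
                'filename': 'bunny.ply',
                'to_world': mi.ScalarTransform4f().rotate(axis=[0, 1, 0], angle=angle),
            },
        })
        frame = mi.render(scene, spp=32)
        mi.util.write_bitmap(f'frame_{i:03d}.png', frame)

The aov integrator shown in the Introduction can be swapped in to produce the per-frame ground-truth maps.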

🧇 Citation

If you find this repo useful, please consider citing:

@article{han2025generative,
  title={Generative Perception of Shape and Material from Differential Motion},
  author={Han, Xinran Nicole and Nishino, Ko and Zickler, Todd},
  journal={arXiv preprint arXiv:2506.02473},
  year={2025}
}

☕️ Acknowledgement

We thank the developers of Mitsuba3 for their valuable contributions to the open-source community. We also thank Kohei Yamashita for helpful discussions about the data generation pipeline.
