Human 3Diffusion: Realistic Avatar Creation via Explicit 3D Consistent Diffusion Models

arXiv, 2024

Yuxuan Xue1, Xianghui Xie1,2, Riccardo Marin1, Gerard Pons-Moll1,2

1Real Virtual Human Group @ University of Tübingen & Tübingen AI Center
2Max Planck Institute for Informatics, Saarland Informatics Campus

News 🚩

  • [2024/06/14] The Human 3Diffusion paper is available on arXiv.
  • [2024/06/14] Inference code and model weights are scheduled to be released after CVPR 2024.

Key Insight 🙌

  • 2D foundation models are powerful, but their outputs lack 3D consistency!
  • 3D generative models can reconstruct a 3D representation, but they generalize poorly!
  • How can we combine 2D foundation models with 3D generative models? (see the sketch after this list)
    • Both are diffusion-based generative models => they can be synchronized at each diffusion step.
    • The 2D foundation model helps 3D generation => it provides strong prior information about 3D shape.
    • The 3D representation guides 2D diffusion sampling => the rendered output of the 3D reconstruction is used for reverse sampling, so 3D consistency is guaranteed.
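The sketch below is a minimal illustration of this synchronization, not the authors' implementation: all names (`diffusion_2d`, `recon_3d`, `renderer`, `scheduler`) are hypothetical interfaces assumed for the example. At each reverse-diffusion step, the 2D foundation model predicts clean multi-view images, an explicit 3D representation is reconstructed from that prediction, and the re-rendered views (which are 3D-consistent by construction) are used for the reverse sampling update.

```python
# Minimal sketch of synchronized 2D/3D reverse diffusion (hypothetical interfaces).
import torch

def synchronized_reverse_sampling(x_T, timesteps, diffusion_2d, recon_3d, renderer, scheduler):
    """
    x_T          : noisy multi-view images at the final timestep, shape (V, C, H, W)
    timesteps    : iterable of timesteps T, T-1, ..., 1
    diffusion_2d : 2D foundation diffusion model predicting clean multi-view images (assumed interface)
    recon_3d     : 3D generative model producing an explicit representation,
                   e.g. 3D Gaussians, from predicted views (assumed interface)
    renderer     : differentiable renderer turning the 3D representation back into views
    scheduler    : diffusion noise scheduler exposing a posterior `step` update (assumed interface)
    """
    x_t = x_T
    representation_3d = None
    for t in timesteps:
        # 1) 2D prior: the foundation model predicts clean multi-view images x0.
        x0_2d = diffusion_2d.predict_x0(x_t, t)

        # 2) 3D generation guided by the 2D prediction: reconstruct an explicit
        #    3D representation from the predicted views at this step.
        representation_3d = recon_3d(x0_2d, t)

        # 3) 3D-consistent guidance: re-render the views from the 3D representation
        #    and use them (instead of the raw 2D prediction) in the reverse step,
        #    so the sample fed back into the 2D model is multi-view consistent.
        x0_3d = renderer(representation_3d)
        x_t = scheduler.step(x0_3d, x_t, t)  # sample x_{t-1} given the 3D-consistent x0 and x_t

    return x_t, representation_3d
```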

Citation ✍️

@article{xue2023human3diffusion,
  title     = {{Human 3Diffusion: Realistic Avatar Creation via Explicit 3D Consistent Diffusion Models}},
  author    = {Xue, Yuxuan and Xie, Xianghui and Marin, Riccardo and Pons-Moll, Gerard},
  journal   = {arXiv},
  year      = {2024},
}