GeoLRM

Project Page | arXiv | Paper | Checkpoint

GeoLRM: Geometry-Aware Large Reconstruction Model for High-Quality 3D Gaussian Generation

Chubin Zhang, Hongliang Song, Yi Wei, Yu Chen, Jiwen Lu, Yansong Tang

Updates:

  • 🔔 2024/6/21 Code release.

TODO:

  • Release training code.
  • Add Hugging Face demos.
  • Add mesh reconstruction support.

🕹 Demos

3D assets generated by GeoLRM:

demo.mp4

📝 Introduction

In this work, we introduce the Geometry-Aware Large Reconstruction Model (GeoLRM), an approach that predicts high-quality assets with 512K Gaussians from 21 input images using only 11 GB of GPU memory. Previous works neglect the inherent sparsity of 3D structure and do not exploit the explicit geometric relationship between 3D structures and 2D images. This limits them to a low-resolution representation and makes it difficult to scale up to dense views for better quality. GeoLRM tackles these issues with a novel 3D-aware transformer structure that directly processes 3D points and uses deformable cross-attention mechanisms to effectively integrate image features into 3D representations. We implement this solution through a two-stage pipeline: first, a lightweight proposal network generates a sparse set of 3D anchor points from the posed image inputs; then, a specialized reconstruction transformer refines the geometry and retrieves textural details.
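
To make the geometry awareness concrete, the sketch below illustrates the core 3D-to-2D relationship: projecting 3D anchor points into a posed input view and sampling image features at the resulting pixel locations, which is the kind of explicit correspondence the deformable cross-attention exploits. This is a minimal illustration only, not code from this repository; `project_points` and `sample_view_features` are hypothetical helpers.

    # Minimal sketch (hypothetical helpers, not this repository's API):
    # project 3D anchor points into one posed view and sample image features there.
    import torch
    import torch.nn.functional as F

    def project_points(points, K, w2c):
        """Project (N, 3) world-space points with a 3x3 intrinsic K and a 4x4
        world-to-camera matrix; returns pixel coordinates and depths."""
        homog = torch.cat([points, points.new_ones(points.shape[0], 1)], dim=-1)  # (N, 4)
        cam = (w2c @ homog.T).T[:, :3]                                            # (N, 3) camera space
        uvw = (K @ cam.T).T                                                       # (N, 3)
        depth = uvw[:, 2:3].clamp(min=1e-6)
        return uvw[:, :2] / depth, depth.squeeze(-1)

    def sample_view_features(points, feats, K, w2c, image_hw):
        """Bilinearly sample a (C, Hf, Wf) feature map at the projected locations."""
        H, W = image_hw
        uv, depth = project_points(points, K, w2c)
        grid = torch.stack([uv[:, 0] / (W - 1) * 2 - 1,                           # normalize to [-1, 1]
                            uv[:, 1] / (H - 1) * 2 - 1], dim=-1)
        grid = grid.view(1, -1, 1, 2)                                             # (1, N, 1, 2)
        sampled = F.grid_sample(feats.unsqueeze(0), grid, align_corners=True)     # (1, C, N, 1)
        valid = depth > 0                                                         # ignore points behind the camera
        return sampled.view(feats.shape[0], -1).T, valid                          # (N, C) features, (N,) mask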

💡 Method

Method Pipeline:

The pipeline first transforms dense tokens into an occupancy grid via a Proposal Transformer, which captures spatial occupancy from hierarchical image features extracted by a convolutional layer combined with DINOv2. Sparse tokens representing the occupied voxels are then processed by a Reconstruction Transformer that employs self-attention and deformable cross-attention to refine geometry and retrieve texture details via 3D-to-2D projection. Finally, the refined 3D tokens are converted into 3D Gaussians for real-time rendering.
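
The structural sketch below summarizes this two-stage flow in illustrative PyTorch pseudocode; the module and argument names are assumptions for exposition, not the actual classes in this repository.

    # Illustrative sketch of the two-stage pipeline (module names are hypothetical).
    import torch.nn as nn

    class GeoLRMSketch(nn.Module):
        def __init__(self, img_encoder, proposal_tf, recon_tf, gaussian_head, occ_threshold=0.5):
            super().__init__()
            self.img_encoder = img_encoder      # conv stem + DINOv2 -> hierarchical image features
            self.proposal_tf = proposal_tf      # dense tokens -> per-voxel occupancy logits
            self.recon_tf = recon_tf            # self-attn + deformable cross-attn over sparse tokens
            self.gaussian_head = gaussian_head  # refined 3D tokens -> Gaussian parameters
            self.occ_threshold = occ_threshold

        def forward(self, images, cameras):
            feats = self.img_encoder(images)                       # multi-view image features
            occ_logits = self.proposal_tf(feats, cameras)          # (D, H, W) occupancy grid
            occupied = occ_logits.sigmoid() > self.occ_threshold   # keep only occupied voxels
            anchors = occupied.nonzero(as_tuple=False)             # sparse 3D anchor coordinates
            tokens = self.recon_tf(anchors, feats, cameras)        # refine geometry, fetch texture via 3D-to-2D projection
            return self.gaussian_head(tokens)                      # 3D Gaussians for real-time rendering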

🔧 Installation

Clone this repo and install the dependencies:

  1. Create a new conda environment and install the dependencies:

    conda create -n geolrm python=3.10
    conda activate geolrm
    conda install pytorch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 pytorch-cuda=12.1 -c pytorch -c nvidia
    pip install flash-attn --no-build-isolation
    pip install -r requirements.txt
  2. Follow the instructions in generative-models to install the sgm package (required for SV3D inference).

  3. Build the curope3d and deform_attn_3d CUDA extensions:

    cd src/models/decoder/curope3d
    python setup.py build_ext --inplace
    cd ../deform_attn_3d
    python setup.py build_ext --inplace

    If you encounter any issues, make sure that the CUDA version used to compile the PyTorch package matches the CUDA version of your NVCC compiler. You can check both by running:

    nvcc --version
    python -c "import torch; print(torch.version.cuda)"

🚀 Quick Start

Download checkpoints

# Download the GeoLRM checkpoint
wget https://huggingface.co/LinShan/GeoLRM/resolve/main/geolrm.ckpt -P ckpts
# Download the SV3D checkpoint
wget https://huggingface.co/LinShan/GeoLRM/resolve/main/sv3d_p.safetensors -P ckpts

Gradio App

python app.py

Then open the browser and visit http://127.0.0.1:42339/.

Inference

python run_georm_sv3d.py configs/geolrm.yaml examples --output_path outputs

Tips for better results:

  • Use high-resolution input images.
  • Orthographic, front-facing images lead to better reconstructions.
  • Avoid white objects and overexposed images.

Training

python train.py --base configs/geolrm-train.yaml --gpus 0 --num_nodes 1

🙏 Acknowledgement

Many thanks to these excellent projects:

InstantMesh, RichDreamer, LGM, Zero123++, 3DGS, diff-gaussian-rasterization (with depth), generative-models, BEVFormer

📃 Bibtex

If this work is helpful for your research, please consider citing the following BibTeX entry.

@article{zhang2024geolrm,
  title={GeoLRM: Geometry-Aware Large Reconstruction Model for High-Quality 3D Gaussian Generation},
  author={Chubin Zhang and Hongliang Song and Yi Wei and Yu Chen and Jiwen Lu and Yansong Tang},
  journal={arXiv preprint arXiv:2406.15333},
  year={2024}
}
