SE-ORNet: Self-Ensembling Orientation-aware Network for Unsupervised Point Cloud Shape Correspondence

PyTorch implementation for our CVPR 2023 paper SE-ORNet.

[Project Webpage] [Paper]

News

  • 28 February 2023: SE-ORNet is accepted to CVPR 2023. 🔥
  • 10 April 2023: SE-ORNet preprint released on arXiv.
  • Coming soon: code release.

Installation

  1. Create a virtual environment via conda.

     conda create -n se_ornet python=3.10 -y
     conda activate se_ornet
  2. Install torch and torchvision.

    conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch -y
  3. Install the remaining dependencies via the setup script (a quick environment check is sketched after these steps).

    sh setup.sh
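
To verify the installation, you can run a quick sanity check (our suggestion, not part of the official setup); on a correctly configured CUDA machine it should print 1.12.1, 0.13.1, and True:

python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__, torch.cuda.is_available())"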

Code structure

├── SE-ORNet
│   ├── __init__.py
│   ├── train.py     <- the main file
│   ├── models
│   │   ├── metrics       
│   │   ├── modules       
│   │   ├── runners    
│   │   ├── correspondence_utils.py  
│   │   ├── data_augment_utils.py
│   │   └── shape_corr_trainer.py   
│   ├── utils
│   │   ├── __init__.py      
│   │   ├── argparse_init.py   
│   │   ├── cyclic_scheduler.py   
│   │   ├── model_checkpoint_utils.py
│   │   ├── pytorch_lightning_utils.py
│   │   ├── switch_functions.py
│   │   ├── tensor_utils.py
│   │   └── warmup.py
│   ├── visualization
│   │   ├── __init__.py
│   │   ├── mesh_container.py
│   │   ├── mesh_visualization_utils.py
│   │   ├── mesh_visualizer.py
│   │   ├── orca_xvfb.bash
│   │   └── visualize_api.py    
│   └── ChamferDistancePytorch
├── data
│   ├── point_cloud_db
│   ├── __init__.py
│   └── generate_smal.md
├── .gitignore
├── .gitmodules
├── README.md
└── LICENSE

Dependencies

The main dependencies of the project are the following:

python: 3.10
cuda: 11.3
pytorch: 1.12.1

Datasets

The method was evaluated on:

  • SURREAL

    • 230k shapes (DPC uses the first 2k).
    • Dataset website
    • This code downloads and preprocesses SURREAL automatically.
  • SHREC’19

    • 44 human scans.
    • Dataset website
    • This code downloads and preprocesses SHREC’19 automatically.
  • SMAL

    • 10,000 animal models (2,000 models per animal, 5 animals).
    • Dataset website
    • Due to licensing concerns, you should register on the SMAL website and download the dataset yourself.
    • After downloading the dataset, follow data/generate_smal.md.
    • To ease the use of this benchmark, the processed dataset can be downloaded from here. Please extract it and place it under data/datasets/smal.
  • TOSCA

    • 41 animal figures.
    • Dataset website
    • This code downloads and preprocesses TOSCA automatically.
    • To ease the use of this benchmark, the processed dataset can be downloaded from here. Please extract it and place it under data/datasets/tosca (see the layout sketch after this list).
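
After extraction, the processed datasets are expected under data/datasets/. The layout below is a sketch inferred from the paths above; the file names inside each folder may differ:

data
└── datasets
    ├── smal     <- extracted processed SMAL dataset
    └── tosca    <- extracted processed TOSCA dataset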

Models

The metrics are obtained from 5 training runs, each followed by a test run. We report both the best and the average values (the latter in parentheses).

Human Datasets

Dataset    mAP@0.25      mAP@0.5     Download
SHREC’19   17.5 (16.8)   5.1 (5.6)   model
SURREAL    22.3 (21.3)   4.5 (4.8)   model

Animal Datasets

Dataset    mAP@0.25      mAP@0.5     Download
TOSCA      40.8 (38.1)   2.7 (2.8)   model
SMAL       38.3 (36.2)   3.3 (3.8)   model

Training & inference

For training, run:

python train.py --dataset_name <surreal/tosca/shrec/smal>

The code is based on PyTorch Lightning, so all PL trainer hyperparameters are supported.
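
For example, assuming the standard PyTorch Lightning 1.x argparse integration (so Trainer flags such as --max_epochs and --gpus are exposed; the values below are illustrative, not the paper's settings):

python train.py --dataset_name surreal --max_epochs 300 --gpus 1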

For testing, add the --do_train false flag, followed by --resume_from_checkpoint with the relevant checkpoint:

python train.py --do_train false --resume_from_checkpoint <path>

The test phase visualizes each sample; for faster inference, pass --show_vis false.

We provide a trained checkpoint reproducing the results reported in the paper. To test and visualize the model, run:

python train.py --show_vis --do_train false --resume_from_checkpoint data/ckpts/surreal_ckpt.ckpt
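
If you want to inspect a downloaded checkpoint before running inference, the minimal sketch below works for any standard Lightning .ckpt file (the key names are Lightning conventions, not specific to this repo):

import torch

# Lightning checkpoints are plain torch pickles; load on CPU to inspect.
ckpt = torch.load("data/ckpts/surreal_ckpt.ckpt", map_location="cpu")
print(sorted(ckpt.keys()))                # typically: epoch, state_dict, hyper_parameters, ...
print(len(ckpt["state_dict"]), "weight tensors")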

BibTeX

If you find our work useful and use the codebase or models in your research, please cite it as follows:

@inproceedings{Deng2023seornet,
  title={{SE}-{ORN}et: Self-Ensembling Orientation-aware Network for Unsupervised Point Cloud Shape Correspondence},
  author={Jiacheng Deng and ChuXin Wang and Jiahao Lu and Jianfeng He and Tianzhu Zhang and Jiyang Yu and Zhe Zhang},
  booktitle={Conference on Computer Vision and Pattern Recognition 2023},
  year={2023},
  url={https://openreview.net/forum?id=DS6AyDWnAv}
}
