
VRHCF: Cross-Source Point Cloud Registration via Voxel Representation and Hierarchical Correspondence Filtering

This repository contains the implementation of the paper "VRHCF: Cross-Source Point Cloud Registration via Voxel Representation and Hierarchical Correspondence Filtering" (arXiv version available; accepted by ICME 2024).

Guiyu Zhao, Zewen Du, Zhentao Guo, Hongbin Ma.

1. Teaser: cross-source point cloud registration

2. Method overview

3. Getting started

(1) Setup

This code has been tested with Python 3.9, PyTorch 1.11.0, and CUDA 11.1 on Ubuntu 20.04.

  • Clone the repository
git clone https://github.com/GuiyuZhao/VRHCF && cd VRHCF
  • Setup the conda virtual environment (a quick sanity check is given at the end of this subsection)
conda create -n VRHCF python=3.9
conda activate VRHCF
conda install pytorch==1.11.0 torchvision==0.12.0 cudatoolkit=11.3 -c pytorch
conda install -c open3d-admin open3d==0.11.1
pip install "git+https://github.com/erikwijmans/Pointnet2_PyTorch.git#egg=pointnet2_ops&subdirectory=pointnet2_ops_lib"
  • Prepare the datasets

You can download the 3DCSR dataset from the 3DCSR benchmark, and the processed 3DMatch dataset from Baidu Yun (verification code: 6nkf).

Then, the data is organised as follows:

--data--3DMatch--fragments
              |--intermediate-files-real
              |--keypoints
              |--patches

--data--3DCSR--kinect_lidar
            |--kinect_sfm
The pretrained models and features can be downloaded from the releases page.
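
As a quick sanity check of the installation (a minimal sketch; whether CUDA reports as available depends on your local driver and GPU), the following imports should all succeed inside the VRHCF environment:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import open3d; print(open3d.__version__)"
python -c "import pointnet2_ops"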

(2) 3DMatch

Follow the instructions in SpinNet/SphereNet to download and place the 3DMatch dataset.

Train

Train SphereNet on the 3DMatch dataset:

cd ./ThreeDMatch/Train
python train.py

Test

Extract the descriptors using our pre-trained model:

cd ./ThreeDMatch/Test
python preparation.py

The learned descriptors will be saved in the ThreeDMatch/Test/SphereNet_{timestr}/ folder, where {timestr} is the timestamp of the run.
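
One way to find the timestr of your run is simply to list the output directory:

ls ThreeDMatch/Test/ | grep SphereNet_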

Evaluate

The extracted feature descriptors are used for feature matching and hierarchical correspondence filtering to complete the point cloud registration. Evaluate our method by running:

python eval_3DMatch.py [timestr] [samplings]

samplings is the number of sampled keypoints; the pose can be estimated with any sampling number.
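
For example, to evaluate with 5,000 sampled keypoints (the timestamp below is only a placeholder; substitute the timestr of your own SphereNet_{timestr} folder):

python eval_3DMatch.py 2024-01-18_12-00 5000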

(3) 3DLoMatch

Test

Extract the descriptors using our pre-trained model:

cd ./ThreeDMatch/Test
python preparation.py

The learned descriptors will be saved in the ThreeDMatch/Test/SphereNet_{timestr}/ folder.

Evaluate

The extracted feature descriptors are used for feature matching and hierarchical correspondence filtering to complete the point cloud registration. Evaluate our method on 3DLoMatch by running:

python eval_3DMatch.py [timestr] [samplings]

samplings is the number of sampled keypoints, as in the 3DMatch example above; the pose can be estimated with any sampling number.

(4) 3DCSR

kinect-lidar

Evaluate on kinect-lidar using our pre-trained model:

python eval_3DCSR_lidar.py [samplings]

kinect-sfm

Evaluate on kinect-sfm using our pre-trained model:

python eval_3DCSR_sfm.py [samplings]
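
For example, with 5,000 sampled keypoints on each benchmark (5000 is only an illustrative sampling number):

python eval_3DCSR_lidar.py 5000
python eval_3DCSR_sfm.py 5000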

Results

(1) 3DMatch

| Method | RR(%) | RE(°) | TE(cm) | FMR(%) | IR(%) |
|---|---|---|---|---|---|
| GeoTransformer | 92.00 | 1.62 | 5.30 | 97.90 | 71.90 |
| VRHCF | 96.21 | 1.81 | 6.02 | 97.35 | 87.76 |

(2) 3DCSR

| Benchmark | RR(%) | FMR(%) | IR(%) | RE(°) | TE(m) |
|---|---|---|---|---|---|
| Kinect-sfm | 93.8 | 90.3 | 96.8 | 2.06 | 0.06 |
| Kinect-lidar | 10.3 | 6.4 | 13.6 | 3.41 | 0.13 |

Acknowledgement

In this project, we use parts of the implementations of the following works: SpinNet and SphereNet.

Updates

  • 01/18/2024: The code is released!
