IvanXie416/CMIGNet

Cross-Modal Information-Guided Network using Contrastive Learning for Point Cloud Registration (RAL 2023)

PyTorch implementation of the paper: Cross-Modal Information-Guided Network using Contrastive Learning for Point Cloud Registration.

(Figure: CMIGNet architecture)

Environment

Our model is trained with the following environment:

  • Ubuntu 20.04
  • Python 3.8
  • PyTorch 1.8.1 with torchvision 0.9.1 (CUDA 11.1)

Other required packages are listed in requirements.txt.

Dataset Preparation

The cross-modal ModelNet40 dataset can be downloaded from Google Drive. Download the archive and unzip it into the data folder.

The pre-trained models can be downloaded from Google Drive.

Usage

A list of options controlling the model's hyperparameters and the experiment settings can be found at the end of main.py. The comments in that file explain each option.
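As an illustrative sketch only, such an options block is typically built with argparse. The flag names below (`--epochs`, `--lr`, `--batch_size`) are hypothetical placeholders, not the repository's actual options; see the end of main.py for the real list.

```python
import argparse

def get_args(argv=None):
    # Hypothetical flags for illustration; the real options live at the end of main.py.
    parser = argparse.ArgumentParser(description="CMIGNet training options (sketch)")
    parser.add_argument("--epochs", type=int, default=100, help="number of training epochs")
    parser.add_argument("--lr", type=float, default=1e-3, help="learning rate")
    parser.add_argument("--batch_size", type=int, default=32, help="mini-batch size")
    # With argv=None argparse reads sys.argv; pass a list to parse explicitly.
    return parser.parse_args(argv)

args = get_args([])  # parse defaults only
print(args.epochs)   # → 100
```

Any flag can then be overridden on the command line, e.g. `python main.py --lr 0.0005`.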

To train a model:

python main.py 

To test a model:

python test.py --model_path <path_to_model>

Citation

If you find our work useful in your research, please consider citing:

@article{xie2023cross,
  author={Xie, Yifan and Zhu, Jihua and Li, Shiqi and Shi, Pengcheng},
  journal={IEEE Robotics and Automation Letters}, 
  title={Cross-Modal Information-Guided Network Using Contrastive Learning for Point Cloud Registration}, 
  year={2024},
  volume={9},
  number={1},
  pages={103-110},
  doi={10.1109/LRA.2023.3331625}
}
