vLPD-Net: Registration-aided 3D Point Cloud Learning for Large-Scale Place Recognition

Our IROS 2021 paper: Registration-aided 3D Point Cloud Learning for Large-Scale Place Recognition.

Paper Video

Author: Zhijian Qiao1†, Hanjiang Hu1, 2†, Weiang Shi1, Siyuan Chen1, Zhe Liu1, 3, Hesheng Wang1.

Shanghai Jiao Tong University1 & Carnegie Mellon University2 & University of Cambridge3

Overview

Introduction

This is our IROS 2021 work. A synthetic virtual dataset is built with GTA-V to cover multi-environment traverses without laborious human effort, while accurate registration ground truth is obtained at the same time.

To this end, we propose vLPD-Net, a novel registration-aided 3D domain adaptation network for point-cloud-based place recognition. A structure-aware registration network is proposed to leverage geometric properties and co-contextual information between two input point clouds. In addition, an adversarial training pipeline is implemented to reduce the gap between the synthetic and real-world domains.

Citation

If you find this work useful, please cite:

@INPROCEEDINGS{9635878,
  author={Qiao, Zhijian and Hu, Hanjiang and Shi, Weiang and Chen, Siyuan and Liu, Zhe and Wang, Hesheng},
  booktitle={2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  title={A Registration-aided Domain Adaptation Network for 3D Point Cloud Based Place Recognition},
  year={2021},
  pages={1317-1322},
  doi={10.1109/IROS51168.2021.9635878}
}

Environment and Dependencies

Code was tested using Python 3.8 with PyTorch 1.7.1 and MinkowskiEngine 0.5.0 on Ubuntu 18.04 with CUDA 10.2.

The following Python packages are required:

  • PyTorch (version 1.7.1)
  • MinkowskiEngine (version 0.5.0)
  • pytorch_metric_learning (version 0.9.94 or above)
  • tensorboard
  • pandas
  • psutil
  • bitarray
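A rough setup sketch for the dependencies above (the exact commands are an assumption; MinkowskiEngine 0.5.0 is compiled against your installed PyTorch/CUDA toolchain, so check its install docs for your system):

```shell
# Create an isolated environment (conda assumed; a plain venv works too)
conda create -n vlpdnet python=3.8 -y
conda activate vlpdnet

# PyTorch 1.7.1 (the tested configuration uses CUDA 10.2)
pip install torch==1.7.1 torchvision==0.8.2

# MinkowskiEngine 0.5.0 builds from source against the installed PyTorch/CUDA
pip install MinkowskiEngine==0.5.0

# Remaining Python dependencies
pip install "pytorch-metric-learning>=0.9.94" tensorboard pandas psutil bitarray
```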

Datasets

vLPD-Net is trained on a subset of the Oxford RobotCar dataset and on a synthetic virtual dataset built in GTA-V. The processed Oxford RobotCar dataset is available from PointNetVLAD. To build the synthetic virtual dataset, we use the DeepGTAV plugin and its extension A-virtual-LiDAR-for-DeepGTAV, which equip the vehicle with cameras and a LiDAR. Besides, we use VPilot for easier interaction with DeepGTAV. The data collected by the plugins is first saved as txt files, then processed and stored in the Oxford RobotCar format.
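The txt-to-Oxford conversion step can be sketched roughly as below. The PointNetVLAD benchmark stores each submap as a flat binary file of 4096 float64 points normalized to [-1, 1]; the txt layout from the GTA plugins and the `txt_to_oxford_bin` helper name here are illustrative assumptions:

```python
import numpy as np

def txt_to_oxford_bin(txt_path: str, bin_path: str, num_points: int = 4096) -> np.ndarray:
    """Convert a raw x/y/z txt dump into a PointNetVLAD-style binary submap."""
    pts = np.loadtxt(txt_path, dtype=np.float64)[:, :3]
    # Center the cloud and scale it into [-1, 1], as the benchmark expects.
    pts -= pts.mean(axis=0)
    pts /= np.abs(pts).max()
    # Subsample (or resample with replacement) to a fixed number of points.
    idx = np.random.choice(len(pts), num_points, replace=len(pts) < num_points)
    pts = pts[idx]
    pts.astype(np.float64).tofile(bin_path)  # flat float64 buffer, shape (4096, 3)
    return pts
```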

The directory structure should look like this:

vLPD-Net
└── benchmark_datasets
     ├── GTA5
     └── oxford

To generate pickles, run:

cd generating_queries/ 

# Generate training tuples for the Oxford Dataset
python generate_training_tuples_baseline.py

# Generate evaluation Oxford tuples
python generate_test_sets.py

# In addition, if you want to experiment on GTAV5, we have open-sourced the code for processing this dataset. We hope it is helpful.
# Generate training tuples for the gtav5 Dataset
python generate_training_tuples_baseline_gtav5.py

# Generate evaluation gtav5 tuples
python generate_test_sets_gtav5.py
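The generated pickles follow PointNetVLAD's convention, mapping a query index to its file path plus the indices of its positive and negative submaps. A minimal reader sketch (the key names `query`, `positives`, `negatives` follow that convention and should be double-checked against the generated files):

```python
import pickle

def load_training_tuples(pickle_path: str) -> dict:
    """Load PointNetVLAD-style tuples: {idx: {'query': path, 'positives': [...], 'negatives': [...]}}."""
    with open(pickle_path, "rb") as f:
        return pickle.load(f)

def describe_tuple(queries: dict, idx: int) -> str:
    """Summarize one training tuple for a quick sanity check."""
    entry = queries[idx]
    return (f"{entry['query']}: {len(entry['positives'])} positives, "
            f"{len(entry['negatives'])} negatives")
```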

Training

We train vLPD-Net on a single 2080Ti GPU.

To train the network, run:

# To train vLPD-Net model on the Oxford Dataset
export OMP_NUM_THREADS=24;CUDA_VISIBLE_DEVICES=0 python train.py --config ./config/config_baseline.txt --model_config ./config/models/vlpdnet.txt

# To train vLPD-Net model on the GTAV5 Dataset
export OMP_NUM_THREADS=24;CUDA_VISIBLE_DEVICES=0 python train.py --config ./config/config_baseline_gtav5.txt --model_config ./config/models/vlpdnet.txt

For registration on the Oxford dataset, we transform the point clouds ourselves to generate ground truth. The registration training pipelines on the two datasets also differ slightly: on the Oxford dataset we use EPCOR only at inference time, while on GTAV5 we train with EPCOR on top of a pretrained whole-to-whole registration model such as VCR-Net. This is because registration on the Oxford dataset is naturally whole-to-whole.
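Transforming the point clouds ourselves amounts to sampling a random rigid motion, applying it to a submap, and keeping the transform as registration ground truth. A minimal sketch (the rotation and translation ranges here are illustrative, not the values used in the paper):

```python
import numpy as np

def random_rigid_transform(max_angle_deg: float = 30.0, max_trans: float = 0.5):
    """Sample a random rotation (Euler angles) and translation."""
    ax, ay, az = np.deg2rad(np.random.uniform(-max_angle_deg, max_angle_deg, 3))
    Rx = np.array([[1, 0, 0], [0, np.cos(ax), -np.sin(ax)], [0, np.sin(ax), np.cos(ax)]])
    Ry = np.array([[np.cos(ay), 0, np.sin(ay)], [0, 1, 0], [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0], [np.sin(az), np.cos(az), 0], [0, 0, 1]])
    return Rz @ Ry @ Rx, np.random.uniform(-max_trans, max_trans, 3)

def make_registration_pair(points: np.ndarray):
    """Return (source, target, R, t) with target = source @ R.T + t as ground truth."""
    R, t = random_rigid_transform()
    return points, points @ R.T + t, R, t
```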

Pre-trained Models

Pretrained models are available in the weights directory:

  • vlpdnet-oxford.pth trained on the Oxford Dataset
  • vlpdnet-gtav5.pth trained on the GTAV5 Dataset with domain adaptation
  • vlpdnet-registration.t7 trained on the Oxford Dataset for registration.

Evaluation

To evaluate pretrained models, run the following commands:


# To evaluate the model trained on the Oxford Dataset 
export OMP_NUM_THREADS=24;CUDA_VISIBLE_DEVICES=0 python evaluate.py --config ./config/config_baseline.txt --model_config ./config/models/vlpdnet.txt --weights=./weights/vlpdnet-oxford.pth 

# To evaluate the model trained on the GTAV5 Dataset
export OMP_NUM_THREADS=24;CUDA_VISIBLE_DEVICES=0 python evaluate.py --config ./config/config_baseline.txt --model_config ./config/models/vlpdnet.txt --weights=./weights/vlpdnet-gtav5.pth

# To evaluate the model trained on the Oxford Dataset for registration
export OMP_NUM_THREADS=24;CUDA_VISIBLE_DEVICES=0 python evaluate.py --config ./config/config_baseline.txt --model_config ./config/models/vlpdnet.txt --eval_reg=./weights/vlpdnet-registration.t7

Acknowledgment

MinkLoc3D

VCR-Net

LPD-Net-Pytorch

License

Our code is released under the MIT License (see LICENSE file for details).
