PATS: Patch Area Transportation with Subdivision for Local Feature Matching
Junjie Ni*, Yijin Li*, Zhaoyang Huang, Hongsheng Li, Hujun Bao, Zhaopeng Cui, Guofeng Zhang
CVPR 2023
- Training script (to be released)
We provide a download link for the following:
- Pretrained models trained on MegaDepth and ScanNet, which are labeled as outdoor and indoor, respectively.
- MegaDepth pairs and scenes (placed in a folder named megadepth_parameters).
- The demo data, which is a sequence of images captured from near to far.
To set up the conda environment and install the package, run:

```bash
conda env create -f environment.yaml
cd setup
python setup.py install
cd ..
```
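As a quick post-installation sanity check, the minimal sketch below verifies that the core dependency imports and that a GPU is visible. It assumes PyTorch is among the dependencies installed by environment.yaml, which is not confirmed by this README.

```python
# Minimal post-install sanity check.
# Assumes PyTorch is provided by the environment created from environment.yaml.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```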
Download the files from the link above, and place the data and model weights as shown below:
```
pats
├── data
│   ├── MegaDepth_v1
│   ├── megadepth_parameters
│   ├── ScanNet
│   ├── yfcc100M
│   └── demo
└── weights
    ├── indoor_coarse.pt
    ├── indoor_fine.pt
    ├── indoor_third.pt
    ├── outdoor_coarse.pt
    ├── outdoor_fine.pt
    └── outdoor_third.pt
```
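Before running evaluation, a short script like the sketch below can confirm that everything is in the expected locations. The paths are taken from the directory tree above, and it assumes you run it from the repository root (i.e., the `pats` folder).

```python
# Check that the expected data folders and pretrained weights are in place.
# Run from the repository root; paths follow the directory tree above.
from pathlib import Path

data_dirs = ["MegaDepth_v1", "megadepth_parameters", "ScanNet", "yfcc100M", "demo"]
weight_files = [
    "indoor_coarse.pt", "indoor_fine.pt", "indoor_third.pt",
    "outdoor_coarse.pt", "outdoor_fine.pt", "outdoor_third.pt",
]

missing = [p for p in
           [Path("data") / d for d in data_dirs] +
           [Path("weights") / w for w in weight_files]
           if not p.exists()]

if missing:
    print("Missing:", *missing, sep="\n  ")
else:
    print("All expected data folders and weight files are present.")
```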
To evaluate on MegaDepth, YFCC100M, and ScanNet, run:

```bash
python evaluate.py configs/test_megadepth.yaml
python evaluate.py configs/test_yfcc.yaml
python evaluate.py configs/test_scannet.yaml
```

To run the demo on the provided image sequence:

```bash
python demo.py configs/test_demo.yaml
```
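If you want to run all of the above in one pass, a hypothetical convenience wrapper such as the one below (not part of the repository) simply re-issues the same commands in sequence:

```python
# Hypothetical convenience wrapper: runs each evaluation config and the demo
# config in sequence by invoking the scripts listed above.
import subprocess
import sys

jobs = [
    ("evaluate.py", "configs/test_megadepth.yaml"),
    ("evaluate.py", "configs/test_yfcc.yaml"),
    ("evaluate.py", "configs/test_scannet.yaml"),
    ("demo.py", "configs/test_demo.yaml"),
]

for script, config in jobs:
    print(f"Running {script} with {config}")
    subprocess.run([sys.executable, script, config], check=True)
```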
If you find this code useful for your research, please use the following BibTeX entry.
```bibtex
@inproceedings{pats2023,
  title={PATS: Patch Area Transportation with Subdivision for Local Feature Matching},
  author={Ni, Junjie and Li, Yijin and Huang, Zhaoyang and Li, Hongsheng and Bao, Hujun and Cui, Zhaopeng and Zhang, Guofeng},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2023}
}
```
We would like to thank the authors of SuperGlue and LoFTR for open-sourcing their projects.