Code Release for LF-Net: Learning Local Features from Images
LF-Net: Learning Local Features from Images

This repository is a tensorflow implementation for Y. Ono, E. Trulls, P. Fua, K.M. Yi, "LF-Net: Learning Local Features from Images". If you use this code in your research, please cite the paper.

(Figure: qualitative comparison of LF-Net and SIFT matches.)

Installation

This code is written in Python 3 and TensorFlow, with CUDA 9.0. See requirements.txt for the full list of required libraries, which you can install with:

pip install -r requirements.txt

Pretrained models and example dataset

Download the pretrained models and the sacre_coeur sequence, and extract them in the current folder so that, for example, the models fall under release/models/outdoor.

We do not plan to release the other datasets at the moment, and we cannot provide support for the training phase. The code is provided as-is, as a reference implementation.

Updates since the arXiv version

The provided pre-trained models are trained with full 360-degree orientation augmentation, so the results you get from them differ slightly from those reported in the arXiv version. We have also added a consistency term on the orientation assignment.

Running the keypoint extraction demo

To run LF-Net for all images in a given directory, simply type:

python run_lfnet.py --in_dir=images --out_dir=outputs

In addition, you can run the 2-view matching demo through notebooks/demo.ipynb.
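The matching step of a 2-view demo can be sketched in plain NumPy, independently of the notebook: pair descriptors from the two images by mutual nearest neighbours in L2 distance. This is an illustrative sketch, not the notebook's actual code; the function name `mutual_nn_match` and the 256-dimensional random descriptors are stand-ins for the descriptors LF-Net would produce.

```python
import numpy as np

def mutual_nn_match(desc1, desc2):
    """Match two descriptor sets by mutual nearest neighbours (L2 distance)."""
    # Pairwise squared L2 distances between all descriptor pairs.
    d = (np.sum(desc1 ** 2, axis=1)[:, None]
         + np.sum(desc2 ** 2, axis=1)[None, :]
         - 2.0 * desc1 @ desc2.T)
    nn12 = np.argmin(d, axis=1)   # best match in image 2 for each descriptor in image 1
    nn21 = np.argmin(d, axis=0)   # best match in image 1 for each descriptor in image 2
    ids1 = np.arange(len(desc1))
    mutual = nn21[nn12] == ids1   # keep only cross-consistent pairs
    return np.stack([ids1[mutual], nn12[mutual]], axis=1)

# Toy data: image 2's descriptors are a reversed copy of image 1's,
# so descriptor i should match descriptor (N - 1 - i).
rng = np.random.RandomState(0)
desc1 = rng.randn(100, 256).astype(np.float32)
desc2 = desc1[::-1].copy()
matches = mutual_nn_match(desc1, desc2)
```

In practice `desc1`/`desc2` would be the descriptors saved by run_lfnet.py for two overlapping images, and the matched keypoint coordinates could then be fed to a RANSAC-based geometric verification.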

Training

The training code is in train_lfnet.py. We do not provide support for the training process or datasets; all issues on this topic will be closed without an answer.

Some Examples

Outdoor dataset (top: LF-Net, bottom: SIFT)
Indoor dataset (top: LF-Net, bottom: SIFT)
Webcam dataset (top: LF-Net, bottom: SIFT)