# SIPs: Succinct Interest Points from Unsupervised Inlierness Probability Learning

*(Example interest point matchings rendered on KITTI and EuRoC.)*

This is the code for the 2019 3DV paper *SIPs: Succinct Interest Points from Unsupervised Inlierness Probability Learning* (PDF) by Titus Cieslewski, Kosta Derpanis, and Davide Scaramuzza. When using this code, please cite:

```bibtex
@InProceedings{Cieslewski19threedv,
  author    = {Titus Cieslewski and Konstantinos G. Derpanis and Davide Scaramuzza},
  title     = {SIPs: Succinct Interest Points from Unsupervised
               Inlierness Probability Learning},
  booktitle = {3D Vision (3DV)},
  year      = 2019
}
```

If you are looking to minimize the amount of data necessary for feature matching, you might also be interested in our related work *Matching Features without Descriptors: Implicitly Matched Interest Points*.

## Supplementary Material

The supplementary material mentioned in the paper can be found at http://rpg.ifi.uzh.ch/datasets/sips2/supp_sips_3dv.zip.

## Installation

We recommend working in a virtual environment (also when using ROS/catkin).
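One possible setup, assuming the `virtualenv` tool is installed (the environment name and location are illustrative):

```sh
# Create and activate an isolated Python environment.
virtualenv ~/sips_venv
source ~/sips_venv/bin/activate
```

With the environment active, install the dependencies: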

```sh
pip install --upgrade opencv-contrib-python==3.4.2.16 opencv-python==3.4.2.16 ipython \
    pyquaternion scipy absl-py hickle matplotlib sklearn tensorflow-gpu cachetools
```

### With ROS/catkin

```sh
sudo apt install python-catkin-tools
mkdir -p sips_ws/src
cd sips_ws
catkin config --init --mkdirs --extend /opt/ros/<YOUR VERSION> --merge-devel
cd src
git clone git@github.com:catkin/catkin_simple.git
git clone git@github.com:uzh-rpg/sips2_open.git
git clone git@github.com:uzh-rpg/imips_open_deps.git
catkin build
. ../devel/setup.bash
```

### Without ROS/catkin

```sh
mkdir sips_ws
cd sips_ws
git clone git@github.com:uzh-rpg/sips2_open.git
git clone git@github.com:uzh-rpg/imips_open_deps.git
```

Make sure `imips_open_deps/rpg_common_py/python`, `imips_open_deps/rpg_datasets_py/python` and `sips2_open/python` are in your `PYTHONPATH`.
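For example, assuming both repositories were cloned into `~/sips_ws` as above (adjust the workspace path to your setup):

```sh
# Add the required packages to PYTHONPATH (e.g. in your ~/.bashrc).
SIPS_WS=$HOME/sips_ws
export PYTHONPATH=$SIPS_WS/imips_open_deps/rpg_common_py/python:$PYTHONPATH
export PYTHONPATH=$SIPS_WS/imips_open_deps/rpg_datasets_py/python:$PYTHONPATH
export PYTHONPATH=$SIPS_WS/sips2_open/python:$PYTHONPATH
```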

## Get pre-trained weights

Download the weights from http://rpg.ifi.uzh.ch/datasets/sips2/d=10_tds=tmbrc_nms=5_pbs_aug_lk_ol=0.30_best.zip and extract them into `python/sips2/checkpoints`.
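A minimal sketch of this step, run from the `sips2_open` repository root (assuming `wget` and `unzip` are available and the archive contents sit at its root):

```sh
wget http://rpg.ifi.uzh.ch/datasets/sips2/d=10_tds=tmbrc_nms=5_pbs_aug_lk_ol=0.30_best.zip
unzip d=10_tds=tmbrc_nms=5_pbs_aug_lk_ol=0.30_best.zip -d python/sips2/checkpoints
```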

## Inference

### Infer any image folder

```sh
python infer_folder.py --in_dir=INPUT_DIR [--num_test_pts=N] [--out_dir=OUTPUT_DIR] [--ext=.EXTENSION]
```

`--num_test_pts` can be specified to extract a given number of interest points; otherwise, 500 points are extracted by default. As shown in the paper, far fewer points (50-100) are required to establish relative pose in typical robotics datasets. If no output directory is provided, it defaults to `$HOME/imips_out/INPUT_DIR`. `--ext` can be used to specify image extensions other than `.jpg` or `.png` (include the dot).
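For instance, to extract 100 interest points from all `.ppm` images in a folder (the paths here are illustrative):

```sh
python infer_folder.py --in_dir=$HOME/data/my_sequence --num_test_pts=100 \
    --out_dir=$HOME/sips_out/my_sequence --ext=.ppm
```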

### Test using our data

Follow these instructions to link up KITTI. To speed things up, you can download http://rpg.ifi.uzh.ch/datasets/imips/tracked_indices.zip and extract the contained files to `python/sips2/tracked_indices` (precalculated visual overlap). Then, run:

```sh
python render_matching.py --ds=kt --val_best --testing
```

This will populate `results/match_render/d=10_tds=tmbrc...` with images like the following:

*(Rendered matching for KITTI sequence 00, frames 275 and 286.)*

## Training

(Re)move the previously downloaded checkpoints. Follow these instructions to link up TUM mono and Robotcars. Then, run:

```sh
python train.py
```

To visualize training progress, you can run:

```sh
python plot_val_metrics.py
```

in parallel. Here is what it should look like after more than 60k iterations:

*(Validation metrics over training iterations, as plotted by `plot_val_metrics.py`.)*

## Acknowledgments

This work was supported by the National Centre of Competence in Research (NCCR) Robotics through the Swiss National Science Foundation and the SNSF-ERC Starting Grant. The Titan Xp used for this research was donated by the NVIDIA Corporation. Konstantinos G. Derpanis is supported by a Canadian NSERC Discovery grant. He contributed to this work in his personal capacity as an Associate Professor at Ryerson University.
