Neural Best-Buddies in PyTorch

This is our PyTorch implementation of the Neural Best-Buddies paper: an algorithm for sparse semantic correspondence between images of different categories.

The code was written by Kfir Aberman and supported by Mingyi Shi.

Neural Best-Buddies: Project | Paper

If you use this code for your research, please cite:

Neural Best-Buddies: Sparse Cross-Domain Correspondence. Kfir Aberman, Jing Liao, Mingyi Shi, Dani Lischinski, Baoquan Chen, Daniel Cohen-Or. SIGGRAPH 2018.

Prerequisites

  • Linux or macOS
  • Python 2 or 3
  • CPU or NVIDIA GPU + CUDA and cuDNN

Run

  • Run the algorithm (demo example)
#!./script.sh
python3 main.py --datarootA ./images/original_A.png --datarootB ./images/original_B.png --name lion_cat --k_final 10

The --k_final option sets the final number of returned points. Results are saved to ../results/ by default; use --results_dir {directory_path_to_save_result} to specify a different results directory.

Output

Sparse correspondence:

  • correspondence_A.txt, correspondence_B.txt
  • correspondence_A_top_k.txt, correspondence_B_top_k.txt

Dense correspondence (densified using MLS):

  • BtoA.npy, AtoB.npy

Warped images (aligned to their middle geometry):

  • warp_AtoM.png, warp_BtoM.png
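
The exact layout of these files is not documented here, so the following is only a minimal loading sketch: the lion_cat results folder and the assumption that each row of the correspondence .txt files is one point's pixel coordinates are illustrative guesses to adjust for your own run.

import os
import numpy as np

# Hypothetical results folder for the demo run; point this at wherever your
# run wrote its output (../results/ by default, or the --results_dir you passed).
results = os.path.join("..", "results", "lion_cat")

# Sparse correspondence: assuming one point per row, so the two files
# correspond row by row. Pass delimiter="," to loadtxt if the values
# turn out to be comma-separated rather than whitespace-separated.
points_A = np.loadtxt(os.path.join(results, "correspondence_A.txt"))
points_B = np.loadtxt(os.path.join(results, "correspondence_B.txt"))
print("sparse points:", points_A.shape, points_B.shape)

# Dense correspondence: the MLS-densified mappings are plain NumPy arrays;
# inspecting their shapes shows how the mapping is laid out.
AtoB = np.load(os.path.join(results, "AtoB.npy"))
BtoA = np.load(os.path.join(results, "BtoA.npy"))
print("dense maps:", AtoB.shape, BtoA.shape)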

Tips

  • If you are running the algorithm on many pairs, we recommend stopping it at the second layer to reduce runtime (at the expense of some accuracy) by using the --fast option.
  • If the images are very similar (e.g., two frames extracted from a video), many corresponding points may be found, resulting in a long runtime. In this case we suggest limiting the number of corresponding points per level by setting --k_per_level 20 (or any other desired number). A batch-run sketch combining both options follows this list.
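
When processing several pairs, the command above can be wrapped in a small driver script. The sketch below is not part of the repository: it assumes --fast is a plain boolean switch (as the tip suggests) and uses only options documented in this README, with illustrative image paths and pair names.

import subprocess

# Hypothetical image pairs; replace the paths and names with your own.
pairs = [
    ("./images/original_A.png", "./images/original_B.png", "lion_cat"),
    # ("path/to/A2.png", "path/to/B2.png", "pair_2"),
]

for path_A, path_B, name in pairs:
    subprocess.check_call([
        "python3", "main.py",
        "--datarootA", path_A,
        "--datarootB", path_B,
        "--name", name,
        "--k_final", "10",
        "--fast",                 # stop at the second layer to reduce runtime
        "--k_per_level", "20",    # cap the number of points per level
        "--results_dir", "./results/",
    ])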

Citation

If you use this code for your research, please cite our paper:

@article{aberman2018neural,
  title={Neural best-buddies: Sparse cross-domain correspondence},
  author={Aberman, Kfir and Liao, Jing and Shi, Mingyi and Lischinski, Dani and Chen, Baoquan and Cohen-Or, Daniel},
  journal={ACM Transactions on Graphics (TOG)},
  volume={37},
  number={4},
  pages={69},
  year={2018},
  publisher={ACM}
}