Efficient Diffusion for Image Retrieval

This is a faster and improved implementation of diffusion for image retrieval, inspired by diffusion-retrieval.

Reference:

If you would like to understand further details of our method, these slides may help.

Features

  • All random-walk computation is moved offline, making the online search remarkably fast (a minimal sketch of this offline/online split follows this list)

  • In contrast to previous work, we achieve better performance by applying late truncation instead of early truncation to the graph

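Below is a minimal, illustrative sketch of the idea behind these two points: build a symmetrically normalized affinity matrix over the kNN graph of the database and solve the closed-form random walk (I - alpha * S) f = y. Everything here (parameter names such as alpha, gamma, and knn, the brute-force similarity computation, the conjugate-gradient solver) is an assumption for illustration and does not mirror diffusion.py; in the repository the neighbors are found with FAISS and the solves happen offline.

```python
# Illustrative sketch only -- parameter names (alpha, gamma, knn) and the
# brute-force kNN below are assumptions and do not mirror diffusion.py.
import numpy as np
from scipy.sparse import csr_matrix, diags, eye
from scipy.sparse.linalg import cg


def affinity_graph(features, knn=50, gamma=3):
    """Symmetrically normalized mutual-kNN affinity S = D^-1/2 A D^-1/2.

    `features` are L2-normalized descriptors, one row per database image.
    """
    sims = features @ features.T
    n = sims.shape[0]
    rows, cols, vals = [], [], []
    for i in range(n):
        for j in np.argsort(-sims[i])[:knn + 1]:
            if j != i:
                rows.append(i)
                cols.append(j)
                vals.append(max(sims[i, j], 0.0) ** gamma)
    A = csr_matrix((vals, (rows, cols)), shape=(n, n))
    A = A.minimum(A.T)                      # keep mutual neighbors only
    d = np.asarray(A.sum(axis=1)).ravel()
    D = diags(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    return D @ A @ D


def diffuse(S, y, alpha=0.99):
    """Closed-form random walk: solve (I - alpha * S) f = y for diffused scores."""
    f, _ = cg(eye(S.shape[0]) - alpha * S, y, maxiter=50)
    return f
```

In the spirit of the offline/online split above, such systems would be solved ahead of time for every database image (with y set to that image's indicator vector) and the results cached; at query time a score over the database is then a weighted combination of the cached vectors for the query's nearest neighbors, so no random walk runs during search.
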
Requirements

  • Install Facebook FAISS by running conda install faiss-cpu -c pytorch

Optional: install faiss-gpu following the installation instructions for your CUDA version

  • Install joblib by running conda install joblib

  • Install tqdm by running conda install tqdm

Parameters

All parameters can be modified in the Makefile. You may want to edit DATASET and FEATURE_TYPE to test every combination of dataset and feature type. The parameter truncation_size is set to 1000 by default; for large datasets such as Oxford105k and Paris106k, increasing it to 5000 will improve performance.

Run

  • Run make download to download files needed in experiments;

  • Run make mat2npy to convert .mat files to .npy files (a minimal sketch of this conversion is shown after this list);

  • Run make rank to get the results. If you have GPUs, try commands like CUDA_VISIBLE_DEVICES=0,1 make rank, where 0,1 are example GPU ids.

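For the mat2npy step, a minimal sketch of what such a conversion can look like is shown here. The variable key 'feats' is a placeholder assumption and not necessarily the key stored in the actual .mat files; check loadmat(path).keys() for the real name.

```python
# Minimal sketch of a .mat -> .npy conversion in the spirit of mat2npy.py.
# The key 'feats' is a placeholder; inspect the .mat file for the real key.
import sys
import numpy as np
from scipy.io import loadmat


def convert(mat_path, npy_path, key='feats'):
    data = loadmat(mat_path)  # loads the MATLAB variables into a dict
    np.save(npy_path, np.asarray(data[key], dtype=np.float32))


if __name__ == '__main__':
    convert(sys.argv[1], sys.argv[2])
```
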
Note: on the Oxford5k and Paris6k datasets, the truncation_size parameter should be no larger than 1024 when using GPUs, due to a FAISS limitation on GPU search. You can use CPUs instead.
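
A minimal sketch of the kNN search step with faiss-cpu is shown below. It is illustrative only, not the exact code in knn.py, and it assumes L2-normalized features so that inner product equals cosine similarity.

```python
# Illustrative kNN search with faiss-cpu; not the exact code in knn.py.
# With a GPU index, k (here truncation_size) is limited, which is why the
# note above suggests falling back to CPU for large truncation sizes.
import numpy as np
import faiss


def knn_search(database, queries, truncation_size=1000):
    d = database.shape[1]
    index = faiss.IndexFlatIP(d)  # inner-product index over L2-normalized features
    index.add(np.ascontiguousarray(database, dtype=np.float32))
    sims, ids = index.search(
        np.ascontiguousarray(queries, dtype=np.float32), truncation_size)
    return sims, ids
```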

Authors