DPH

Caffe implementation for "Deep Priority Hashing" (ACMMM 2018).

Prerequisites

Linux or OSX

NVIDIA GPU + CUDA 7.5 or CUDA 8.0 and the corresponding cuDNN

Caffe

Python 2.7

Modifications to Caffe

  • Added a multi-label layer that enables "ImageDataLayer" to process multi-label datasets.
  • Added a "PairwiseLoss" layer implementing the weighted pairwise loss described in our paper. The sigmoid_param_ in the code is the scale parameter of the adaptive sigmoid function (see the sketch after this list).
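
As a rough illustration of what the "PairwiseLoss" layer computes, here is a minimal NumPy sketch of a weighted pairwise cross-entropy loss built on an adaptive sigmoid. This is not the repository's C++/CUDA layer: the function name, the way the per-pair priority weights are passed in, and the default beta value are assumptions for illustration only; the exact weighting scheme is the one defined in the paper.

import numpy as np

def weighted_pairwise_loss(codes, similarity, weights, beta=1.0):
    # codes:      (N, K) continuous hash codes produced by the network
    # similarity: (N, N) pairwise labels, 1 if two images share a label else 0
    # weights:    (N, N) per-pair weights (priority weights; assumed given here)
    # beta:       scale of the adaptive sigmoid, i.e. the role of sigmoid_param_
    inner = np.dot(codes, codes.T)        # pairwise inner products <h_i, h_j>
    z = beta * inner
    # negative log-likelihood of sigma_beta(<h_i, h_j>) = 1 / (1 + exp(-z))
    # given s_ij, written as log(1 + exp(z)) - s_ij * z for numerical stability
    per_pair = weights * (np.logaddexp(0.0, z) - similarity * z)
    return per_pair.mean()

The actual layer runs inside Caffe's forward/backward passes and also computes the gradient with respect to the codes.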

Datasets

We use the ImageNet, NUS-WIDE and COCO datasets in our experiments. You can download the ImageNet and NUS-WIDE datasets here. For COCO, we use COCO 2014, which can be downloaded here. In case the COCO release changes in the future, we also provide a download link here on Google Drive. After downloading, move imagenet.tar.gz to ./data/imagenet and extract it there.

mv imagenet.tar.gz ./data/imagenet
cd ./data/imagenet
tar -zxvf imagenet.tar.gz

Likewise, for NUS-WIDE, move nus_wide.tar.gz to ./data/nuswide_81 and extract it there.

mv nus_wide.tar.gz ./data/nuswide_81
cd ./data/nuswide_81
tar -zxvf nus_wide.tar.gz

For the COCO dataset, extract both the train and val archives into ./data/coco. If you downloaded them from the COCO download page:

mv train2014.zip ./data/coco
mv val2014.zip ./data/coco
cd ./data/coco
unzip train2014.zip
unzip val2014.zip

If you use our shared link:

mv coco.tar.gz ./data/coco
cd ./data/coco
tar -zxvf coco.tar.gz
unzip train2014.zip
unzip val2014.zip

You can also modify the list files (txt format) in ./data as you like. Each line in a list file has the following format:

<image path><space><one hot label representation>
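
For example, a line for a hypothetical 5-class dataset (path and labels made up purely for illustration) would look like:

/path/to/images/0001.jpg 0 0 1 0 0

For multi-label datasets such as NUS-WIDE and COCO, several entries of the label vector can be 1 at the same time.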

Compiling

The compilation process is the same as for Caffe. You can refer to the Caffe installation instructions here.
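
For reference, a typical Makefile-based Caffe build looks roughly like the commands below; adjust Makefile.config (CUDA, cuDNN and BLAS paths) to your machine first. The -j8 parallelism is an arbitrary choice, and the CMake route described in the Caffe instructions works as well.

cp Makefile.config.example Makefile.config
# edit Makefile.config to match your CUDA/cuDNN/BLAS setup
make all -j8
make pycaffe   # needed for the Python prediction scripts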

Training

First, download the AlexNet model pre-trained on ImageNet from here and move it to ./models/bvlc_reference_caffenet.

The VGG model pre-trained on ImageNet can be downloaded here.

Then, you can train the model for each dataset using the following command.

AlexNet
dataset_name = imagenet, nuswide_81 or coco
./build/tools/caffe train -solver models/train/dataset_name/solver.prototxt -weights ./models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel -gpu gpu_id

or

VGG
dataset_name = imagenet, nuswide_81 or coco
./build/tools/caffe train -solver models/train/dataset_name/solver_vgg.prototxt -weights ./models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel -gpu gpu_id
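
For example, substituting the placeholders to train AlexNet on ImageNet with GPU 0 (the dataset and GPU id are arbitrary choices here):

./build/tools/caffe train -solver models/train/imagenet/solver.prototxt -weights ./models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel -gpu 0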

For more instructions about training and parameter setting, see the instructions in the training directory.

Evaluation

You can evaluate the Mean Average Precision (MAP) result on each dataset using the following command.

dataset_name = imagenet, nuswide_81 or coco
python models/predict/dataset_name/predict_parallel.py --gpu gpu_id --model_path your_caffemodel_path --save_path the_path_to_save_your_hash_code
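
For example, with the placeholders filled in (the model and output paths below are hypothetical), evaluating an ImageNet model on GPU 0 might look like:

python models/predict/imagenet/predict_parallel.py --gpu 0 --model_path ./models/imagenet_dph.caffemodel --save_path ./hash_codes/imagenet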

We provide trained models for each dataset and code length used in our experiments. You can download them here if you want to use them.

If you have already generated hash codes with the previous step or by another method and want to test their MAP, you can specify the code_path parameter.

dataset_name = imagenet, nuswide_81 or coco
python models/predict/dataset_name/predict_parallel.py --code_path the_path_of_your_hash_code
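
Continuing the hypothetical paths from the example above, re-evaluating previously saved ImageNet codes would be:

python models/predict/imagenet/predict_parallel.py --code_path ./hash_codes/imagenet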

For more instructions about prediction and parameter setting, see the instructions in the predicting directory.

Citation

If you use this code for your research, please consider citing:

BibTeX entry under development.

Contact

If you have any problem with our code, feel free to contact caozhangjie14@gmail.com or describe your problem in the Issues.