I am doing multi-label image annotation using Caffe; the label vectors I use look like [0 1 0 1]. I want to use a pairwise rank loss as the multi-label loss function. Can pairwise_loss_layer.cpp be used in my project? Could you give the full code of pairwise_loss_layer.cpp? Thank you! #5757
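As a hedged illustration of what the question is asking for (this is a generic sketch, not the pairwise_loss_layer.cpp in this repository): a pairwise ranking loss for multi-label annotation can hinge on every (present, absent) label pair, penalizing an absent label whose score comes within a margin of a present label's score.

```python
# Generic pairwise ranking loss sketch for a multi-label vector like [0 1 0 1].
# NOT the repository's C++ layer; an assumed formulation for illustration only.
import numpy as np

def pairwise_rank_loss(scores, labels, margin=1.0):
    """scores: (num_labels,) predicted scores; labels: (num_labels,) 0/1 vector."""
    pos = scores[labels == 1]          # scores of labels present in the image
    neg = scores[labels == 0]          # scores of absent labels
    # hinge on every positive/negative pair: max(0, margin - s_pos + s_neg)
    diffs = margin - pos[:, None] + neg[None, :]
    return np.maximum(0.0, diffs).sum()

loss = pairwise_rank_loss(np.array([2.0, 0.5, 1.5, 0.1]),
                          np.array([0, 1, 0, 1]))
```

Minimizing this pushes the scores of present labels above those of absent labels by at least the margin.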

Open
wants to merge 6 commits into
from
@@ -96,3 +96,8 @@ LOCK
LOG*
CURRENT
MANIFEST-*
+
+# log and script
+nohup.out
+hyperopt_test.py
+train.sh
@@ -2,7 +2,8 @@
# Contributions simplifying and improving our build system are welcome!
# cuDNN acceleration switch (uncomment to build with cuDNN).
-# USE_CUDNN := 1
+USE_CUDNN := 1
+CUDNN_PATH := /home/libs/cuda-5.0
# CPU-only switch (uncomment to build without GPU support).
# CPU_ONLY := 1
@@ -25,7 +26,7 @@
# CUSTOM_CXX := g++
# CUDA directory contains bin/ and lib/ directories that we need.
-CUDA_DIR := /usr/local/cuda
+CUDA_DIR := /usr/local/cuda-7.5
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr
@@ -43,12 +44,12 @@ CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
-BLAS := atlas
+BLAS := open
# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
-# BLAS_INCLUDE := /path/to/your/blas
-# BLAS_LIB := /path/to/your/blas
+BLAS_INCLUDE := /opt/OpenBLAS/include
+BLAS_LIB := /opt/OpenBLAS/lib
# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
@@ -87,8 +88,8 @@ PYTHON_LIB := /usr/lib
# WITH_PYTHON_LAYER := 1
# Whatever else you find you need goes here.
-INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
-LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib
+INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include $(CUDNN_PATH)/include
+LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib $(CUDNN_PATH)/lib64
# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
@@ -98,7 +99,6 @@ LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1
-# N.B. both build and distribute dirs are cleared on `make clean`
BUILD_DIR := build
DISTRIBUTE_DIR := distribute
@@ -1,37 +1,47 @@
-# Caffe
-
-[![Build Status](https://travis-ci.org/BVLC/caffe.svg?branch=master)](https://travis-ci.org/BVLC/caffe)
-[![License](https://img.shields.io/badge/license-BSD-blue.svg)](LICENSE)
-
-Caffe is a deep learning framework made with expression, speed, and modularity in mind.
-It is developed by the Berkeley Vision and Learning Center ([BVLC](http://bvlc.eecs.berkeley.edu)) and community contributors.
-
-Check out the [project site](http://caffe.berkeleyvision.org) for all the details like
-
-- [DIY Deep Learning for Vision with Caffe](https://docs.google.com/presentation/d/1UeKXVgRvvxg9OUdh_UiC5G71UMscNPlvArsWER41PsU/edit#slide=id.p)
-- [Tutorial Documentation](http://caffe.berkeleyvision.org/tutorial/)
-- [BVLC reference models](http://caffe.berkeleyvision.org/model_zoo.html) and the [community model zoo](https://github.com/BVLC/caffe/wiki/Model-Zoo)
-- [Installation instructions](http://caffe.berkeleyvision.org/installation.html)
-
-and step-by-step examples.
-
-[![Join the chat at https://gitter.im/BVLC/caffe](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/BVLC/caffe?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
-
-Please join the [caffe-users group](https://groups.google.com/forum/#!forum/caffe-users) or [gitter chat](https://gitter.im/BVLC/caffe) to ask questions and talk about methods and models.
-Framework development discussions and thorough bug reports are collected on [Issues](https://github.com/BVLC/caffe/issues).
-
-Happy brewing!
-
-## License and Citation
-
-Caffe is released under the [BSD 2-Clause license](https://github.com/BVLC/caffe/blob/master/LICENSE).
-The BVLC reference models are released for unrestricted use.
-
-Please cite Caffe in your publications if it helps your research:
-
- @article{jia2014caffe,
- Author = {Jia, Yangqing and Shelhamer, Evan and Donahue, Jeff and Karayev, Sergey and Long, Jonathan and Girshick, Ross and Guadarrama, Sergio and Darrell, Trevor},
- Journal = {arXiv preprint arXiv:1408.5093},
- Title = {Caffe: Convolutional Architecture for Fast Feature Embedding},
- Year = {2014}
+# hash-caffe
+
+This is a Caffe repository for learning to hash. It is forked from [Caffe](https://github.com/BVLC/caffe) with our modifications. The main modifications are as follows:
+
+- Add a `multi label layer`, which enables ImageDataLayer to process multi-label datasets.
+- Add a `pairwise loss layer` and a `quantization loss layer`, described in the paper "Deep Hashing Network for Efficient Similarity Retrieval".
+
+Data Preparation
+---------------
+In `data/nus_wide/train.txt`, we give an example showing how to prepare training data. In `data/nus_wide/parallel/`, the lists of testing and database images are split into 12 parts, which can be processed in parallel during prediction.
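The 12-way split described above can be sketched as follows (a minimal illustration with made-up file names; the actual split files live under `data/nus_wide/parallel/`):

```python
# Split an image list into n_parts contiguous chunks so each chunk can be
# fed to a separate prediction process. File names below are hypothetical.
def split_parts(lines, n_parts=12):
    # ceil division so every line lands in exactly one chunk
    size = (len(lines) + n_parts - 1) // n_parts
    return [lines[i * size:(i + 1) * size] for i in range(n_parts)]

parts = split_parts([f"img_{i}.jpg" for i in range(100)], 12)
```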
+
+Training Model
+---------------
+
+In `models/DHN/nus_wide/`, we give an example showing how to train a hash model. In this model, we use the pairwise loss and the quantization loss as loss functions.
+
+The [bvlc\_reference\_caffenet](http://dl.caffe.berkeleyvision.org/bvlc_reference_caffenet.caffemodel) is used as the pre-trained model. Once the NUS\_WIDE dataset and the pre-trained caffemodel are prepared, the example can be run with the following command:
+```
+"./build/tools/caffe train -solver models/DHN/nus_wide/solver.prototxt -weights models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel"
+```
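The two losses this model trains with can be sketched in numpy (a rough sketch following the formulas in the DHN paper, not the C++ layers in this repository; `H` holds relaxed real-valued hash codes, `S` holds pairwise similarity labels):

```python
# DHN-style losses, assumed from the paper:
#   pairwise loss  L = sum_ij [ log(1 + exp(theta_ij)) - s_ij * theta_ij ],
#     theta_ij = 0.5 * <h_i, h_j>, s_ij = 1 iff images i and j share a label;
#   quantization loss Q = sum log(cosh(|h| - 1)), pushing codes toward +/-1.
import numpy as np

def pairwise_loss(H, S):
    theta = 0.5 * H @ H.T                      # inner products of hash codes
    # logaddexp(0, x) = log(1 + exp(x)), computed stably
    return np.sum(np.logaddexp(0.0, theta) - S * theta)

def quantization_loss(H):
    return np.sum(np.log(np.cosh(np.abs(H) - 1.0)))

H = np.array([[0.9, -0.8], [1.1, -1.0]])       # two relaxed 2-bit hash codes
S = np.array([[1, 1], [1, 1]])                 # both images share a label
total = pairwise_loss(H, S) + quantization_loss(H)
```

The quantization term is small here because the code entries are already close to ±1.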
+
+Parameter Tuning
+---------------
+In the pairwise loss layer and the quantization loss layer, the parameter `loss_weight` can be tuned to weight each loss differently.
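A hypothetical prototxt fragment showing where `loss_weight` goes (layer names, types, and blob names here are assumptions for illustration; consult the actual prototxts under `models/DHN/nus_wide/`):

```protobuf
layer {
  name: "pairwise_loss"        # hypothetical name
  type: "PairwiseLoss"         # assumed type string
  bottom: "hash_codes"
  bottom: "label"
  top: "pairwise_loss"
  loss_weight: 1.0             # weight of this loss in the total objective
}
layer {
  name: "quantization_loss"    # hypothetical name
  type: "QuantizationLoss"     # assumed type string
  bottom: "hash_codes"
  top: "quantization_loss"
  loss_weight: 0.1             # smaller weight for the quantization term
}
```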
+
+Predicting
+---------------
+In `models/DHN/predict/predict_parallel.py`, we give an example to show how to evaluate the trained hash model.
+
+Citation
+---------------
+ @inproceedings{DBLP:conf/aaai/ZhuL0C16,
+ author = {Han Zhu and
+ Mingsheng Long and
+ Jianmin Wang and
+ Yue Cao},
+ title = {Deep Hashing Network for Efficient Similarity Retrieval},
+ booktitle = {Proceedings of the Thirtieth {AAAI} Conference on Artificial Intelligence,
+ February 12-17, 2016, Phoenix, Arizona, {USA.}},
+ pages = {2415--2421},
+ year = {2016},
+ crossref = {DBLP:conf/aaai/2016},
+ url = {http://www.aaai.org/ocs/index.php/AAAI/AAAI16/paper/view/12039},
+ timestamp = {Thu, 21 Apr 2016 19:28:00 +0200},
+ biburl = {http://dblp.uni-trier.de/rec/bib/conf/aaai/ZhuL0C16},
+ bibsource = {dblp computer science bibliography, http://dblp.org}
}