I am doing multilabel image annotation using Caffe; the label vectors I use look like [0 1 0 1]. I want to use a pairwise rank loss as the multi-label loss function. Can pairwise_loss_layer.cpp be used in my project? Could you provide the full code of pairwise_loss_layer.cpp? Thank you! #5757
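For context, a pairwise rank loss over a multi-label vector such as [0 1 0 1] penalizes every (positive, negative) label pair whose scores are ranked in the wrong order. Here is a minimal NumPy sketch of that general technique — purely illustrative, not the code of this repository's pairwise_loss_layer.cpp, and the function name is invented for this example:

```python
import numpy as np

def pairwise_rank_loss(scores, labels, margin=1.0):
    """Hinge-style pairwise ranking loss for one sample.

    scores: (C,) predicted scores, one per label
    labels: (C,) binary multi-label vector, e.g. [0, 1, 0, 1]

    Penalizes every (positive, negative) label pair where the
    positive label's score does not beat the negative label's
    score by at least `margin`.
    """
    pos = scores[labels == 1]            # scores of present labels
    neg = scores[labels == 0]            # scores of absent labels
    # All (positive, negative) score differences at once
    diff = pos[:, None] - neg[None, :]
    return np.maximum(0.0, margin - diff).sum()

scores = np.array([0.2, 0.9, 0.1, 0.6])
labels = np.array([0, 1, 0, 1])
print(pairwise_rank_loss(scores, labels))  # approximately 1.6
```

A real Caffe loss layer would additionally implement the backward pass (the gradient of this hinge sum with respect to `scores`), which is omitted here.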
Commits (6, by zhuhan1236):

- ff9639d Update README.md
- 5d0a34b change .gitignore and make config
- f8b1b6d add multi label layer, pairwise loss, quantization loss
- 07786d2 update README.md
- 56f70e8 add nus_wide data and example
- b0b677b Update README.md
@@ -1,37 +1,47 @@
-# Caffe
-
-[](https://travis-ci.org/BVLC/caffe)
-[](LICENSE)
-
-Caffe is a deep learning framework made with expression, speed, and modularity in mind.
-It is developed by the Berkeley Vision and Learning Center ([BVLC](http://bvlc.eecs.berkeley.edu)) and community contributors.
-
-Check out the [project site](http://caffe.berkeleyvision.org) for all the details like
-
-- [DIY Deep Learning for Vision with Caffe](https://docs.google.com/presentation/d/1UeKXVgRvvxg9OUdh_UiC5G71UMscNPlvArsWER41PsU/edit#slide=id.p)
-- [Tutorial Documentation](http://caffe.berkeleyvision.org/tutorial/)
-- [BVLC reference models](http://caffe.berkeleyvision.org/model_zoo.html) and the [community model zoo](https://github.com/BVLC/caffe/wiki/Model-Zoo)
-- [Installation instructions](http://caffe.berkeleyvision.org/installation.html)
-
-and step-by-step examples.
-
-[](https://gitter.im/BVLC/caffe?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
-
-Please join the [caffe-users group](https://groups.google.com/forum/#!forum/caffe-users) or [gitter chat](https://gitter.im/BVLC/caffe) to ask questions and talk about methods and models.
-Framework development discussions and thorough bug reports are collected on [Issues](https://github.com/BVLC/caffe/issues).
-
-Happy brewing!
-
-## License and Citation
-
-Caffe is released under the [BSD 2-Clause license](https://github.com/BVLC/caffe/blob/master/LICENSE).
-The BVLC reference models are released for unrestricted use.
-
-Please cite Caffe in your publications if it helps your research:
-
-    @article{jia2014caffe,
-      Author = {Jia, Yangqing and Shelhamer, Evan and Donahue, Jeff and Karayev, Sergey and Long, Jonathan and Girshick, Ross and Guadarrama, Sergio and Darrell, Trevor},
-      Journal = {arXiv preprint arXiv:1408.5093},
-      Title = {Caffe: Convolutional Architecture for Fast Feature Embedding},
-      Year = {2014}
+# hash-caffe
+
+This is a Caffe repository for learning to hash. We forked the repository from [Caffe](https://github.com/BVLC/caffe) and made our modifications. The main modifications are as follows:
+
+- Add a `multi label layer`, which enables ImageDataLayer to process multi-label datasets.
+- Add a `pairwise loss layer` and a `quantization loss layer`, described in the paper "Deep Hashing Network for Efficient Similarity Retrieval".
+
+Data Preparation
+---------------
+In `data/nus_wide/train.txt`, we give an example showing how to prepare training data. In `data/nus_wide/parallel/`, the lists of testing and database images are split into 12 parts, which can be processed in parallel during prediction.
+
+Training Model
+---------------
+
+In `models/DHN/nus_wide/`, we give an example showing how to train a hash model. In this model, we use the pairwise loss and quantization loss as loss functions.
+
+The [bvlc\_reference\_caffenet](http://dl.caffe.berkeleyvision.org/bvlc_reference_caffenet.caffemodel) is used as the pre-trained model. Once the NUS\_WIDE dataset and the pre-trained caffemodel are prepared, the example can be run with the following command:
+```
+./build/tools/caffe train -solver models/DHN/nus_wide/solver.prototxt -weights models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel
+```
+
+Parameter Tuning
+---------------
+In the pairwise loss layer and the quantization loss layer, the parameter `loss_weight` can be tuned to give the two losses different weights.
+
+Predicting
+---------------
+In `models/DHN/predict/predict_parallel.py`, we give an example showing how to evaluate the trained hash model.
+
+Citation
+---------------
+    @inproceedings{DBLP:conf/aaai/ZhuL0C16,
+      author    = {Han Zhu and
+                   Mingsheng Long and
+                   Jianmin Wang and
+                   Yue Cao},
+      title     = {Deep Hashing Network for Efficient Similarity Retrieval},
+      booktitle = {Proceedings of the Thirtieth {AAAI} Conference on Artificial Intelligence,
+                   February 12-17, 2016, Phoenix, Arizona, {USA.}},
+      pages     = {2415--2421},
+      year      = {2016},
+      crossref  = {DBLP:conf/aaai/2016},
+      url       = {http://www.aaai.org/ocs/index.php/AAAI/AAAI16/paper/view/12039},
+      timestamp = {Thu, 21 Apr 2016 19:28:00 +0200},
+      biburl    = {http://dblp.uni-trier.de/rec/bib/conf/aaai/ZhuL0C16},
+      bibsource = {dblp computer science bibliography, http://dblp.org}
 }