update readme.md
chenbinghui1 committed Apr 3, 2019
1 parent 72f7f49 commit 341c671
Showing 2 changed files with 6 additions and 6 deletions.
6 changes: 3 additions & 3 deletions README.md
@@ -49,9 +49,9 @@ This code is developed based on [Caffe](https://github.com/BVLC/caffe/).
* 2 GPUs, each with more than 11 GB of memory
### Train_Model
1. The installation is exactly the same as for [Caffe](http://caffe.berkeleyvision.org/). Please follow the [installation instructions](http://caffe.berkeleyvision.org/installation.html), and make sure Caffe is correctly installed before using our code.
2. Download the training images **CUB** {[google drive](https://drive.google.com/open?id=1V_5tS4YgyMRxUM7QHINn7aRYizwjxwmC), [baidu](https://pan.baidu.com/s/1X4W1xucDBxZafITvPF8SXQ) (psw:w3vh)} and move them to $(your_path). The images are preprocessed in the same way as in [Lifted Loss](https://github.com/rksltnl/Deep-Metric-Learning-CVPR16/), i.e. with zero padding.
3. Download the **training list** (400M) {[google drive](https://drive.google.com/open?id=1P2lUicV-nMchMU_aP6JbgzOibOG1o-F6), [baidu](https://pan.baidu.com/s/1-NH4rpkYwbLjR0tIkr30nA) (psw:xbct)} and move it to the folder "~/Hybrid-Attention-based-Decoupled-Metric-Learning-master/examples/CUB/". (Or you can create your own list by randomly selecting 65 classes with 2 samples per class.)
4. Download [googlenetV1](http://dl.caffe.berkeleyvision.org/bvlc_googlenet.caffemodel) (used for U512) and **3nets** {[google drive](https://drive.google.com/open?id=1boQISUyXaV77qCS0u5Nmlv6dckuN25mM), [baidu](https://pan.baidu.com/s/10Q1wPeHMYtXEJ5cqY9GgPg) (psw:dmev)} (used for DeML3-3_512) to the folder "~/Hybrid-Attention-based-Decoupled-Metric-Learning-master/examples/CUB/pre-trained-model/". (Each stream of the 3nets model is initialized with the same googlenetV1 model.)
5. Modify the image path by changing "root_folder" to $(your_path) in all *.prototxt files.
6. Then you can train our baseline method **U512** and the proposed **DeML(I=3,J=3)** by running
```
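Step 3 above notes that the training list can be regenerated by randomly selecting 65 classes with 2 samples per class. A minimal Python sketch of that sampling; the in-memory class-to-images mapping and the "relative/path label" output format (Caffe's usual ImageData list convention) are assumptions, not part of the released code:

```python
import random

def make_training_list(class_to_images, num_classes=65, samples_per_class=2, seed=0):
    """Randomly pick `num_classes` classes and `samples_per_class` images
    from each, emitting "class/image label" lines (Caffe ImageData style)."""
    rng = random.Random(seed)
    # Sort for determinism, then sample classes and images with a fixed seed.
    chosen = rng.sample(sorted(class_to_images), num_classes)
    lines = []
    for label, cls in enumerate(chosen):
        for img in rng.sample(sorted(class_to_images[cls]), samples_per_class):
            lines.append("%s/%s %d" % (cls, img, label))
    return lines
```

Writing the returned lines to a text file and pointing the data layer's `source` at it mirrors the structure of the provided list; the exact label numbering used by the released 400M list is not specified here.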
6 changes: 3 additions & 3 deletions README.md~
@@ -49,9 +49,9 @@ This code is developed based on [Caffe](https://github.com/BVLC/caffe/).
* 2 GPUs, each with more than 11 GB of memory
### Train_Model
1. The installation is exactly the same as for [Caffe](http://caffe.berkeleyvision.org/). Please follow the [installation instructions](http://caffe.berkeleyvision.org/installation.html), and make sure Caffe is correctly installed before using our code.
2. Download the training images **CUB** [google drive](https://drive.google.com/open?id=1V_5tS4YgyMRxUM7QHINn7aRYizwjxwmC), [baidu](https://pan.baidu.com/s/1X4W1xucDBxZafITvPF8SXQ) (psw:w3vh) and move them to $(your_path). The images are preprocessed in the same way as in [Lifted Loss](https://github.com/rksltnl/Deep-Metric-Learning-CVPR16/), i.e. with zero padding.
3. Download the **training list** (400M) [google drive](https://drive.google.com/open?id=1P2lUicV-nMchMU_aP6JbgzOibOG1o-F6), [baidu](https://pan.baidu.com/s/1-NH4rpkYwbLjR0tIkr30nA) (psw:xbct), and move it to the folder "~/Hybrid-Attention-based-Decoupled-Metric-Learning-master/examples/CUB/". (Or you can create your own list by randomly selecting 65 classes with 2 samples per class.)
4. Download [googlenetV1](http://dl.caffe.berkeleyvision.org/bvlc_googlenet.caffemodel) (used for U512) and **3nets** [google drive](https://drive.google.com/open?id=1boQISUyXaV77qCS0u5Nmlv6dckuN25mM), [baidu](https://pan.baidu.com/s/10Q1wPeHMYtXEJ5cqY9GgPg) (psw:dmev) (used for DeML3-3_512) to the folder "~/Hybrid-Attention-based-Decoupled-Metric-Learning-master/examples/CUB/pre-trained-model/". (Each stream of the 3nets model is initialized with the same googlenetV1 model.)
5. Modify the image path by changing "root_folder" to $(your_path) in all *.prototxt files.
6. Then you can train our baseline method **U512** and the proposed **DeML(I=3,J=3)** by running
```
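The zero padding mentioned in step 2 (following the Lifted Loss preprocessing) pads each image with zeros to make it square before further resizing. A minimal NumPy sketch; centering the original pixels in the padded canvas is an assumption here, so check the Lifted Loss repository for the exact recipe:

```python
import numpy as np

def zero_pad_to_square(img):
    """Pad an (H, W, C) image with zeros so that H == W,
    keeping the original pixels centered in the padded canvas."""
    h, w = img.shape[:2]
    size = max(h, w)
    top = (size - h) // 2
    left = (size - w) // 2
    # Allocate a square zero canvas and paste the image into its center.
    out = np.zeros((size, size) + img.shape[2:], dtype=img.dtype)
    out[top:top + h, left:left + w] = img
    return out
```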
