# SaliencyELD

Source code for our CVPR 2016 paper "Deep Saliency with Encoded Low level Distance Map and High Level Features" by Gayoung Lee, Yu-Wing Tai and Junmo Kim ([arXiv paper link](http://arxiv.org/abs/1604.05495)).

*(Figure: overview of our model)*

Acknowledgement: our code uses several libraries: Caffe, VLFeat, OpenCV, and Boost.

## Usage

1. Dependencies
   - OS: tested on Ubuntu 14.04.
   - CMake: tested with CMake 2.8.12.
   - Caffe: the version of Caffe we used is included in this repository.
   - VLFeat: tested with VLFeat 0.9.20.
   - OpenCV: we used OpenCV 3.0, but the code may also work with OpenCV 2.4.x.
   - g++: the code uses OpenMP and C++11; tested with g++ 4.9.2.
   - Boost: tested with Boost 1.46.
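   On Ubuntu 14.04, most of these can typically be installed through apt. The package list below is an assumption for convenience, not taken from this repository; in particular, VLFeat is usually built from source, and 14.04's packaged OpenCV is 2.4, so OpenCV 3.0 may also need a source build.

   ```shell
   # Assumed package names for Ubuntu 14.04; adjust for your environment.
   sudo apt-get update
   sudo apt-get install build-essential cmake g++ libboost-all-dev \
       libopencv-dev libprotobuf-dev protobuf-compiler libgflags-dev \
       libglog-dev libhdf5-serial-dev libleveldb-dev liblmdb-dev \
       libsnappy-dev libatlas-base-dev
   # VLFeat 0.9.20 is typically built from source: http://www.vlfeat.org/
   ```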

2. Installation
   1. Get our pretrained ELD model and the VGG16 model. Some paths for the Caffe models and prototxts are hard-coded in main.cpp; check them if you download the models into a different folder (see the search sketch below).

     **NOTE: If you cannot download our ELD model from dropbox, please download it from [this Baidu link](http://pan.baidu.com/s/1jI94TAu).**
    
     ```shell
     cd $(PROJECT_ROOT)/models/
     sh get_models.sh
     ```
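
     If you do move the models, a quick way to find the hard-coded paths that need updating is to search main.cpp; the grep pattern below is an assumption about how the paths are written.

     ```shell
     # List lines in main.cpp that mention model or prototxt paths.
     grep -n "models\|prototxt\|caffemodel" $(PROJECT_ROOT)/main.cpp
     ```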
    
   2. Build Caffe in the project folder using CMake:

      ```shell
      cd $(PROJECT_ROOT)/caffe/
      mkdir build
      cd build/
      cmake ..
      make -j4
      ```
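      Before moving on, you can optionally sanity-check the build by running Caffe's unit tests. The runtest target exists in standard Caffe CMake builds; whether the Caffe bundled here exposes it too is an assumption.

      ```shell
      # Optional: run Caffe's test suite to verify the build.
      cd $(PROJECT_ROOT)/caffe/build
      make runtest
      ```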
   3. Change the library paths in $(PROJECT_ROOT)/CMakeLists.txt for your environment and build our code:

      ```shell
      cd $(PROJECT_ROOT)
      # edit CMakeLists.txt to match your library locations
      mkdir build
      cd build/
      cmake ..
      make
      ```
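      Depending on how CMakeLists.txt locates the libraries, you may be able to override paths on the cmake command line instead of editing the file. OpenCV_DIR and BOOST_ROOT are standard CMake variables; VLFEAT_ROOT is a hypothetical name, so check CMakeLists.txt for the variable it actually reads.

      ```shell
      # Hypothetical overrides; only effective if CMakeLists.txt uses them.
      cmake .. \
          -DOpenCV_DIR=/usr/local/share/OpenCV \
          -DBOOST_ROOT=/usr/local \
          -DVLFEAT_ROOT=$HOME/vlfeat-0.9.20
      ```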
   4. Run the executable, which takes one argument: the path of a directory containing test images:

      ```shell
      ./SaliencyELD ../test_images
      ```
   5. The results will be generated in the test image directory.
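      To process several datasets in one run, a simple shell loop over their image directories works; the directory layout below is an assumption.

      ```shell
      # Assumed layout: one subdirectory of images per dataset.
      for d in ../datasets/ASD ../datasets/ECSSD ../datasets/PASCAL-S; do
          ./SaliencyELD "$d"
      done
      ```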

## Results on the datasets used in the paper

*(Figure: visualization of results on the benchmark datasets)*

For convenience, we provide our results on the benchmark datasets used in the paper. For each dataset, link1 points to Dropbox and link2 to Baidu.

ASD results (link1) (link2) (ASD dataset site)

ECSSD results (link1) (link2) (ECSSD dataset site)

PASCAL-S results (link1) (link2) (PASCAL-S dataset site)

DUT-OMRON results (link1) (link2) (DUT-OMRON dataset site)

THUR15K results (link1) (link2) (THUR15K dataset site)

## Citing our work

Please cite our work if it helps your research:

@inproceedings{lee2016saliency,
    title = {Deep Saliency with Encoded Low level Distance Map and High Level Features},
    author = {Lee, Gayoung and Tai, Yu-Wing and Kim, Junmo},
    booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    year = {2016}
}
