EGNet

EGNet: Edge Guidance Network for Salient Object Detection (ICCV 2019)

We use sal2edge.m to generate the edge labels for training.
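sal2edge.m is the authors' MATLAB tool for this step. Purely for orientation, here is a rough Python sketch of the same idea (read a binary saliency mask and keep only its boundary), assuming OpenCV and hypothetical masks/ and edges/ folders of grayscale PNG ground truth; it is not the repo's code:

```python
# Hypothetical sketch of edge-label generation (the repo's actual tool is sal2edge.m).
import os
import cv2
import numpy as np

def saliency_to_edge(mask_path, edge_path):
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
    # Morphological gradient (dilation minus erosion) keeps only the object boundary.
    kernel = np.ones((3, 3), np.uint8)
    edge = cv2.morphologyEx(binary, cv2.MORPH_GRADIENT, kernel)
    cv2.imwrite(edge_path, edge)

if __name__ == "__main__":
    os.makedirs("edges", exist_ok=True)
    for name in os.listdir("masks"):  # hypothetical folder of ground-truth masks
        saliency_to_edge(os.path.join("masks", name), os.path.join("edges", name))
```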

For training:

  1. Clone this repository with git clone https://github.com/JXingZhao/EGNet.git --recursive; we assume your source code directory is $EGNet;

  2. Download the training data (fsex) (Google Drive);

  3. Download the initial model (8ir7) (Google Drive);

  4. Change the image path and initial model path in run.py and dataset.py (a hypothetical configuration sketch follows this list);

  5. Start training with python3 run.py --mode train.
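Step 4 above only means pointing the code at your local data and the downloaded initial model. The real option and variable names are defined in run.py and dataset.py and may differ; the following is just a hypothetical illustration of the kind of edit intended:

```python
# Hypothetical illustration of step 4; adapt to the actual names in run.py / dataset.py.
TRAIN_IMAGE_ROOT = "/data/saliency/train/images"  # training images you downloaded
TRAIN_LABEL_ROOT = "/data/saliency/train/masks"   # saliency ground-truth masks
TRAIN_EDGE_ROOT  = "/data/saliency/train/edges"   # edge labels produced by sal2edge.m
INIT_MODEL_PATH  = "/models/vgg16_init.pth"       # downloaded initial model
```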

For testing:

  1. Download the pretrained model (2cf5) (Google Drive);

  2. Change the test image path in dataset.py;

  3. Generate saliency maps for the SOD dataset with python3 run.py --mode test --sal_mode s, for PASCAL-S with python3 run.py --mode test --sal_mode p, and so on;

  4. For evaluation we use the publicly available code at https://github.com/Andrew-Qibin/SalMetric (a simplified metric sketch follows this list).
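SalMetric is the evaluation code actually used. Purely for orientation, here is a simplified sketch of the two standard numbers it reports, MAE and F-measure with beta^2 = 0.3, assuming pred and gt are float NumPy arrays in [0, 1] of the same shape:

```python
# Simplified metric sketch; the real evaluation uses https://github.com/Andrew-Qibin/SalMetric.
import numpy as np

def mae(pred, gt):
    # Mean absolute error between the saliency map and the binary ground truth.
    return float(np.abs(pred - gt).mean())

def f_measure(pred, gt, beta2=0.3):
    # Binarize the prediction at twice its mean value (a common adaptive threshold).
    thresh = min(2.0 * pred.mean(), 1.0)
    binary = (pred >= thresh).astype(np.float32)
    tp = float((binary * gt).sum())
    precision = tp / (binary.sum() + 1e-8)
    recall = tp / (gt.sum() + 1e-8)
    return (1.0 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)
```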

Pretrained models, datasets and results:

| Page | Training Set (fsex) (Google Drive) | Pretrained models (2cf5) | Saliency maps (54gi) (Google Drive, VGG) (Google Drive, ResNet) |

If you find this work helpful, please cite:

@inproceedings{zhao2019EGNet,
 title={EGNet: Edge Guidance Network for Salient Object Detection},
 author={Zhao, Jia-Xing and Liu, Jiang-Jiang and Fan, Deng-Ping and Cao, Yang and Yang, Jufeng and Cheng, Ming-Ming},
 booktitle={The IEEE International Conference on Computer Vision (ICCV)},
 month={Oct},
 year={2019},
}

Other related work

Contrast Prior and Fluid Pyramid Integration for RGBD Salient Object Detection (CVPR 2019): page
