Code and results for APNet (IEEE TETCI)
'APNet: Adversarial-Learning-Assistance and Perceived Importance Fusion Network for All-Day RGB-T Salient Object Detection'
Requirements: Python 3.7, PyTorch 1.5.0+, CUDA 10.2, TensorboardX 2.1, opencv-python
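A quick way to confirm the environment matches these requirements is a short version check (a minimal sketch; it only assumes the packages above were installed via pip under their standard names):

```python
# check_env.py -- print the versions of the main dependencies listed above
import sys
import torch
import cv2
import tensorboardX

print("Python      :", sys.version.split()[0])      # expected 3.7.x
print("PyTorch     :", torch.__version__)           # expected 1.5.0 or newer
print("CUDA usable :", torch.cuda.is_available())   # expected True with CUDA 10.2
print("OpenCV      :", cv2.__version__)
print("TensorboardX:", tensorboardX.__version__)    # expected 2.1
```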
RGB-T SOD datasets can be found at: https://github.com/lz118/RGBT-Salient-Object-Detection
Evaluation tools: we use the MATLAB version provided by Dengping Fan.
NEW: We provide the saliency maps of all compared methods in the paper: Baidu (extraction code: zust) or Google Drive.
Test saliency maps on all datasets [predict]: Baidu (extraction code: vy3r) or Google Drive.
The pretrained model can be downloaded at [APNet.pth]: Baidu (extraction code: vy3r) or Google Drive.
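A minimal sketch of loading the downloaded checkpoint for inference is shown below. The import path, the model class name, its constructor, and the two-stream forward signature are assumptions based on typical PyTorch RGB-T SOD repositories, not the exact API of this project; adjust them to the code in this repository.

```python
import torch
# Hypothetical import path: replace with the actual module that defines the network.
from model.APNet import APNet

net = APNet()                                         # hypothetical constructor
state = torch.load('APNet.pth', map_location='cpu')   # the downloaded checkpoint
net.load_state_dict(state)
net.eval()

with torch.no_grad():
    rgb = torch.randn(1, 3, 224, 224)       # placeholder RGB input
    thermal = torch.randn(1, 3, 224, 224)   # placeholder thermal input
    saliency = net(rgb, thermal)            # assumed two-stream forward signature
```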
PS: we resize the testing data to 224 × 224 for quick evaluation [GT for MATLAB]: Baidu (extraction code: vy3r) or Google Drive.
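For reference, resizing the ground-truth (or predicted) maps to 224 × 224 before running the MATLAB evaluation can be done with opencv-python. This is a minimal sketch, assuming the maps live in flat directories of PNG files; the directory names are placeholders.

```python
import os
import cv2

src_dir = 'GT_original'   # placeholder: directory with full-resolution ground-truth maps
dst_dir = 'GT_224'        # placeholder: output directory for the resized maps
os.makedirs(dst_dir, exist_ok=True)

for name in os.listdir(src_dir):
    if not name.lower().endswith('.png'):
        continue
    gt = cv2.imread(os.path.join(src_dir, name), cv2.IMREAD_GRAYSCALE)
    # Nearest-neighbour keeps binary ground truth binary; predictions could use bilinear instead.
    gt = cv2.resize(gt, (224, 224), interpolation=cv2.INTER_NEAREST)
    cv2.imwrite(os.path.join(dst_dir, name), gt)
```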
@ARTICLE{9583676,
  author={Zhou, Wujie and Zhu, Yun and Lei, Jingsheng and Wan, Jian and Yu, Lu},
  journal={IEEE Transactions on Emerging Topics in Computational Intelligence},
  title={APNet: Adversarial Learning Assistance and Perceived Importance Fusion Network for All-Day RGB-T Salient Object Detection},
  year={2021},
  volume={},
  number={},
  pages={1-12},
  doi={10.1109/TETCI.2021.3118043}}
The implementation of this project is based on the code of 'Cascaded Partial Decoder for Fast and Accurate Salient Object Detection' (CVPR 2019) by Wu et al. and 'BBS-Net: RGB-D Salient Object Detection with a Bifurcated Backbone Strategy Network' by Fan et al.
Please drop me an email for further questions or discussion: zzzyylink@gmail.com or wujiezhou@163.com