Henghao Sun, Litao Zhua, Weibin Ma, Yucheng Mao and Wen Dai
Salient Object Detection (SOD) is a fundamental task in computer vision that aims to accurately identify and segment the most visually distinctive regions within complex backgrounds. However, the performance of existing SOD methods is limited in indoor scenarios characterized by dynamic lighting, severe background clutter, and low contrast between objects and their backgrounds. Recently, the Segment Anything Model (SAM) has emerged as a promising paradigm for mitigating these challenges, leveraging its exceptional zero-shot segmentation capabilities and robust generalization. Consequently, we propose a novel indoor salient object detection model, named SAM2-PFF (SAM2 with Progressive Feature Fusion).
```shell
git clone https://github.com/fearless0721/SAM-PFF.git
```

Our project does not depend on installing SAM2. If you have already configured an environment for SAM2, using that environment directly should also work. You may also create a new conda environment:
```shell
conda create -n sam2-pff python=3.10
conda activate sam2-pff
pip install -r requirements.txt
```

If you want to train your own model, please download the pre-trained Segment Anything 2 checkpoint from the official repository. You can also directly download sam2_hiera_large.pt from here. After the above preparations, you can run train.sh to start training.
After obtaining the prediction maps, you can run eval.sh to compute most of the quantitative results.
Please star this project if you use this repository in your research. Thank you!