
SAM-adapter: Adapting SAM in Underperformed Scenes

Tianrun Chen, Lanyun Zhu, Chaotao Ding, Runlong Cao, Yan Wang, Shangzhan Zhang, Zejian Li, Lingyun Sun, Papa Mao, Ying Zang

KOKONI, Moxin Technology (Huzhou) Co., Ltd., Zhejiang University, Singapore University of Technology and Design, Huzhou University, Beihang University.

In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 3367-3375).

Update on 30 August: This paper will be presented at ICCV 2023.

Update on 28 April: We tested the performance on polyp segmentation to show that our approach also works on medical datasets.

Update on 22 April: We report our SOTA result based on the ViT-H version of SAM (use demo.yaml). We have also uploaded the yaml configs for the ViT-L and ViT-B versions of SAM, suitable for GPUs with smaller memory (e.g. NVIDIA Tesla V100), although they may compromise on accuracy.

Environment

This code was implemented with Python 3.8 and PyTorch 1.13.0. You can install all the requirements via:

pip install -r requirements.txt
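
An optional sanity check confirms that the expected PyTorch build and CUDA devices are visible before training (expected output is 1.13.0, True, and your GPU count):

python -c "import torch; print(torch.__version__, torch.cuda.is_available(), torch.cuda.device_count())"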

Quick Start

  1. Download the dataset and put it in ./load.
  2. Download the pre-trained SAM (Segment Anything) checkpoint and put it in ./pretrained (see the directory sketch after this list).
  3. Training:
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nnodes 1 --nproc_per_node 4 loadddptrain.py --config configs/demo.yaml

Please note that the SAM model consumes a lot of GPU memory. We use 4 x A100 GPUs for training. If you encounter memory issues, please try to use graphics cards with larger memory!

  4. Evaluation:
python test.py --config [CONFIG_PATH] --model [MODEL_PATH]
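
After steps 1 and 2, the repository should look roughly like the sketch below (the checkpoint name sam_vit_h_4b8939.pth is the filename Meta uses for the ViT-H release; dataset subfolders under ./load depend on which dataset you download):

SAM-Adapter-PyTorch/
├── configs/
│   └── demo.yaml
├── load/                      # datasets from step 1
├── pretrained/                # SAM checkpoints from step 2
│   └── sam_vit_h_4b8939.pth
├── loadddptrain.py
└── test.py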

Train

CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nnodes 1 --nproc_per_node 4 train.py --config [CONFIG_PATH]
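
If four high-memory GPUs are not available, the ViT-L/ViT-B configs mentioned in the 22 April update can be used instead. A sketch for a single smaller GPU (the config filename below is hypothetical; substitute whichever ViT-L/ViT-B yaml ships in configs/):

CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nnodes 1 --nproc_per_node 1 train.py --config configs/demo_vit_b.yaml  # hypothetical config name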

Update on 30 July: As suggested by @YunyaGaoTree in issue #39, you can also try the command below for (probably) faster training.

torchrun train.py --config configs/demo.yaml
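
torchrun is the recommended replacement for the deprecated torch.distributed.launch, so the multi-GPU equivalent of the command above would presumably be the following (standard torchrun usage, not a command taken from this repo):

CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --nnodes 1 --nproc_per_node 4 train.py --config configs/demo.yaml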

Test

python test.py --config [CONFIG_PATH] --model [MODEL_PATH]
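
For example, to evaluate a model trained with the demo config (both paths below are placeholders for your actual config and checkpoint files):

python test.py --config configs/demo.yaml --model ./save/model_epoch_last.pth  # placeholder paths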

Pre-trained Models

https://drive.google.com/file/d/1MMUytUHkAQvMRFNhcDyyDlEx_jWmXBkf/view?usp=sharing
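
Since the checkpoint is hosted on Google Drive, it can also be fetched from the command line with the third-party gdown tool (the output filename below is a placeholder):

pip install gdown
gdown 1MMUytUHkAQvMRFNhcDyyDlEx_jWmXBkf -O ./pretrained/sam_adapter.pth  # placeholder filename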

Dataset

Camouflaged Object Detection

Shadow Detection

Polyp Segmentation - Medical Applications

Citation

If you find our work useful in your research, please consider citing:

@misc{chen2023sam,
      title={SAM Fails to Segment Anything? -- SAM-Adapter: Adapting SAM in Underperformed Scenes: Camouflage, Shadow, and More}, 
      author={Tianrun Chen and Lanyun Zhu and Chaotao Ding and Runlong Cao and Shangzhan Zhang and Yan Wang and Zejian Li and Lingyun Sun and Papa Mao and Ying Zang},
      year={2023},
      eprint={2304.09148},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Acknowledgements

Part of the code is derived from Explicit Visual Prompt by Weihuang Liu, Xi Shen, Chi-Man Pun, and Xiaodong Cun of the University of Macau and Tencent AI Lab.
