SurgicalPart-SAM: Part-to-Whole Collaborative Prompting for Surgical Instrument Segmentation

Wenxi Yue, Jing Zhang, Kun Hu, Qiuxia Wu, Zongyuan Ge, Yong Xia, Jiebo Luo, Zhiyong Wang

News | Abstract | Results | Installation | Data | Checkpoints | Train | Inference

News

2023.12.22 - The technical report is posted on arXiv. Work in progress.

Abstract

The Segment Anything Model (SAM) exhibits promise in generic object segmentation and offers potential for various applications. Existing methods have applied SAM to surgical instrument segmentation (SIS) by tuning SAM-based frameworks with surgical data. However, they fall short in two crucial aspects: (1) Straightforward model tuning with instrument masks treats each instrument as a single entity, neglecting their complex structures and fine-grained details; and (2) Instrument category-based prompts are not flexible and informative enough to describe instrument structures. To address these problems, in this paper, we investigate text promptable SIS and propose SurgicalPart-SAM (SP-SAM), a novel SAM efficient-tuning approach that explicitly integrates instrument structure knowledge with SAM's generic knowledge, guided by expert knowledge on instrument part compositions. Specifically, we achieve this by proposing (1) Collaborative Prompts that describe instrument structures via collaborating category-level and part-level texts; (2) Cross-Modal Prompt Encoder that encodes text prompts jointly with visual embeddings into discriminative part-level representations; and (3) Part-to-Whole Adaptive Fusion and Hierarchical Decoding that adaptively fuse the part-level representations into a whole for accurate instrument segmentation in surgical scenarios. Built upon them, SP-SAM acquires a better capability to comprehend surgical instruments in terms of both overall structure and part-level details. Extensive experiments on both the EndoVis2018 and EndoVis2017 datasets demonstrate SP-SAM's state-of-the-art performance with minimal tunable parameters.
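
As a rough, unofficial illustration of the part-to-whole adaptive fusion described in the abstract, the sketch below shows one way part-level prompt embeddings could be gated and summed into a whole-instrument representation. This is a minimal PyTorch sketch under assumed names and shapes (PartToWholeFusion, embed_dim, the gating design); it is not the repository's actual module.

# Minimal, hypothetical sketch of part-to-whole adaptive fusion.
# Not the official SP-SAM code; names and shapes are assumptions.
import torch
import torch.nn as nn


class PartToWholeFusion(nn.Module):
    """Adaptively fuse part-level embeddings into one whole-instrument embedding."""

    def __init__(self, embed_dim: int = 256):
        super().__init__()
        # A per-part gate conditioned on the part embedding itself.
        self.gate = nn.Sequential(nn.Linear(embed_dim, 1), nn.Sigmoid())
        self.proj = nn.Linear(embed_dim, embed_dim)

    def forward(self, part_embeds: torch.Tensor) -> torch.Tensor:
        # part_embeds: (batch, num_parts, embed_dim), e.g. encodings of
        # part-level text prompts ("shaft", "wrist", "jaw") combined with
        # the category-level prompt and visual features upstream.
        weights = self.gate(part_embeds)            # (batch, num_parts, 1)
        whole = (weights * part_embeds).sum(dim=1)  # adaptive weighted sum over parts
        return self.proj(whole)                     # (batch, embed_dim)


# Example usage with dummy part-level representations.
fusion = PartToWholeFusion(embed_dim=256)
parts = torch.randn(2, 3, 256)   # batch of 2, 3 parts, 256-dim embeddings
print(fusion(parts).shape)       # torch.Size([2, 256])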

Figure 1: Overview of SurgicalPart-SAM.

Results


Figure 2: Visualisation Results of SurgicalPart-SAM.

Citing SurgicalPart-SAM

If you find SurgicalPart-SAM helpful, please consider citing:

@misc{yue2024surgicalpartsam,
      title={SurgicalPart-SAM: Part-to-Whole Collaborative Prompting for Surgical Instrument Segmentation}, 
      author={Wenxi Yue and Jing Zhang and Kun Hu and Qiuxia Wu and Zongyuan Ge and Yong Xia and Jiebo Luo and Zhiyong Wang},
      year={2024},
      eprint={2312.14481},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

About

Official implementation of SurgicalPart-SAM (SP-SAM)
