
Awesome-segment-anything-model :octocat:

Original paper link: https://ai.facebook.com/research/publications/segment-anything/

Original repo link: https://github.com/facebookresearch/segment-anything

Related Papers

This paper presents a framework that uses the Segment Anything Model (SAM) to generate pseudo labels for pretraining thermal infrared image segmentation models, along with a large-scale thermal infrared segmentation dataset. The approach offers an effective way to apply large models in specialized fields where label annotation is challenging, and it is shown to improve segmentation accuracy beyond the state-of-the-art ImageNet-pretrained baseline.
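
A minimal sketch of the general idea (not the paper's exact pipeline): run SAM's automatic mask generator from the original facebookresearch/segment-anything repo on an unlabeled image and rasterize the proposals into a pseudo-label map. The checkpoint filename, input file names, and the way proposals are merged here are assumptions for illustration.

```python
# Sketch: generate SAM pseudo masks for an unlabeled image.
# Uses the public segment-anything API; the merging strategy is an assumption.
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
sam.to("cuda")
mask_generator = SamAutomaticMaskGenerator(sam)

# SAM expects an HxWx3 uint8 RGB image.
image = cv2.cvtColor(cv2.imread("unlabeled_thermal.png"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # list of dicts with 'segmentation', 'area', ...

# Rasterize the proposals into one pseudo-label map (largest masks first,
# so smaller objects overwrite the regions they sit on).
pseudo = np.zeros(image.shape[:2], dtype=np.int32)
for idx, m in enumerate(sorted(masks, key=lambda m: m["area"], reverse=True), start=1):
    pseudo[m["segmentation"]] = idx
np.save("unlabeled_thermal_pseudo.npy", pseudo)
```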

This paper presents the first attempt to use GPT-4 to generate multimodal language-image instruction-following data. The data is used to train the Large Language and Vision Assistant (LLaVA), an end-to-end trained large multimodal model that connects a vision encoder with an LLM for general-purpose visual and language understanding. Early experiments show that LLaVA exhibits impressive chat abilities and achieves an 85.1% relative score compared to GPT-4 on a synthetic multimodal instruction-following dataset. When fine-tuned on Science QA, the synergy of LLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%.

This paper investigates the capability of the Segment Anything Model (SAM) for medical image analysis, specifically multi-phase liver tumor segmentation (MPLiTS). Experiments reveal a gap between SAM and the expected performance; however, the qualitative results show that SAM is a powerful annotation tool for interactive medical image segmentation.

This report evaluates the performance of the recently released Meta AI Research Segment Anything Model (SAM) on polyp segmentation under unprompted settings. The evaluation results are publicly available at https://github.com/taozh2017/SAMPolyp and may provide insights that advance the polyp segmentation field and inspire further research.
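
As a rough illustration of what an unprompted evaluation can look like (not necessarily the exact protocol used in SAMPolyp): generate SAM's "everything" proposals and score the one that best overlaps the ground-truth polyp mask.

```python
# Sketch: score SAM's unprompted proposals against a ground-truth mask.
# The best-proposal matching rule is an illustrative assumption.
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU between two boolean HxW masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / union if union else 0.0

def best_proposal_iou(sam_masks: list, gt_mask: np.ndarray) -> float:
    """sam_masks: output of SamAutomaticMaskGenerator.generate(); gt_mask: bool HxW."""
    return max((iou(m["segmentation"], gt_mask) for m in sam_masks), default=0.0)
```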

This study proposes SAM-Adaptor, an image segmentation network that incorporates domain-specific information or visual prompts into the large pre-trained Segment Anything Model (SAM). Experimental findings show that SAM-Adaptor can significantly elevate SAM's performance on challenging tasks such as shadow detection and camouflaged object detection, and can even achieve state-of-the-art performance. It has potential applications in various fields, including medical image processing, agriculture, and remote sensing.
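
For intuition, here is a conceptual PyTorch sketch of the adapter idea: a small trainable module that adds a task-conditioned residual to features from a frozen backbone. The module shape, prompt format, and where it plugs into SAM's image encoder are assumptions, not the paper's exact architecture.

```python
# Conceptual sketch of a lightweight adapter for a frozen backbone.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck MLP that injects a task prompt as a residual (illustrative)."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, tokens: torch.Tensor, task_prompt: torch.Tensor) -> torch.Tensor:
        # Add a learned, task-conditioned residual to the frozen features.
        return tokens + self.up(self.act(self.down(tokens + task_prompt)))

# During training, the SAM backbone would stay frozen and only adapters learn, e.g.:
# for p in sam.image_encoder.parameters():
#     p.requires_grad = False
```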

This paper presents Inpaint Anything (IA), a mask-free image inpainting system based on the Segment Anything Model (SAM). IA offers three features: (i) Remove Anything; (ii) Fill Anything with text-based prompts; and (iii) Replace Anything.
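
A rough sketch of the "Fill Anything" idea under stated assumptions: a single clicked point is turned into a mask by SAM, and the masked region is then inpainted from a text prompt with a diffusers Stable Diffusion inpainting pipeline. The model id, click coordinates, prompts, and glue code are illustrative assumptions, not IA's actual implementation.

```python
# Sketch: point click -> SAM mask -> text-prompted inpainting of the masked region.
import numpy as np
import torch
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor
from diffusers import StableDiffusionInpaintPipeline

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth").to("cuda")
predictor = SamPredictor(sam)

image = np.array(Image.open("photo.png").convert("RGB"))
predictor.set_image(image)
masks, scores, _ = predictor.predict(
    point_coords=np.array([[450, 300]]),  # clicked pixel (x, y) -- assumed
    point_labels=np.array([1]),           # 1 = foreground click
    multimask_output=True,
)
mask = masks[int(scores.argmax())]        # keep SAM's highest-scoring mask

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")
result = pipe(
    prompt="a wooden bench",
    image=Image.fromarray(image).resize((512, 512)),
    mask_image=Image.fromarray(mask.astype(np.uint8) * 255).resize((512, 512)),
).images[0]
result.save("filled.png")
```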

This paper compares the Segment Anything Model (SAM) with FSL's Brain Extraction Tool (BET) for brain extraction on different brain scans. Results show that SAM outperforms BET on various evaluation metrics, especially in the presence of signal inhomogeneities, non-isotropic voxel resolutions, or lesions near the brain's outer regions and meninges. SAM's superior performance suggests its potential as a more accurate, robust, and versatile tool for brain extraction and segmentation applications.

In this report, three concealed scenes (camouflaged animals, industrial defects, and medical lesions) have been tested to evaluate SAM's performance under unprompted settings. The main observation is that SAM struggles to accurately identify objects in these scenes.

This work investigates the performance of the Segment Anything Model (SAM) pre-trained on SA-1B across various applications, such as natural images, agriculture, manufacturing, remote sensing, and healthcare. The benefits and limitations of SAM are analyzed and discussed, with an outlook on future development of segmentation tasks. This provides a comprehensive view of SAM in practice, which will facilitate future research activities.

This paper introduces the Segment Any Medical Model (SAMM), a 3D Slicer extension of the Segment Anything Model (SAM) for medical image segmentation. SAMM demonstrates good promptability and generalizability and can infer masks in near real time, with 0.6-second latency. The open-source SAMM and its demonstrations are available on GitHub.
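
Near-real-time interaction with SAM typically comes from running the heavy image encoder once per image and reusing the cached embedding for every new prompt, so only the lightweight mask decoder runs interactively. A minimal sketch of that pattern with the public segment-anything API (not SAMM's actual 3D Slicer integration; the slice data and click points are placeholders):

```python
# Sketch: encode the image once, then answer repeated prompts cheaply.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth").to("cuda")
predictor = SamPredictor(sam)

# Stand-in for one RGB-converted image slice.
slice_rgb = np.random.randint(0, 255, (512, 512, 3), dtype=np.uint8)
predictor.set_image(slice_rgb)          # slow step: runs the ViT encoder once

for click in [(100, 120), (260, 300)]:  # each new prompt only runs the decoder
    masks, scores, _ = predictor.predict(
        point_coords=np.array([click]),
        point_labels=np.array([1]),
        multimask_output=False,
    )
```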

This paper investigates how well SAM performs in the task of Camouflaged Object Detection (COD) and compares SAM's performance to 22 state-of-the-art COD methods.

This paper proposes using large vision models such as the Segment Anything Model (SAM) to guide semi-automated data annotation for domain-specific object detection, together with the High Fine-Grain Fill-in Augmentation (HFGFA) method for visual image data augmentation. These approaches are shown to improve model generalization and open-world object detection capabilities.

This study evaluates the SAM model's performance on whole slide imaging (WSI) tasks such as tumor segmentation, non-tumor tissue segmentation, and cell nuclei segmentation. The results suggest that the model performs well for large objects, but does not consistently perform well for dense instance object segmentation. Identified limitations for digital pathology include image resolution, multiple scales, prompt selection, and model fine-tuning. Future work should explore few-shot fine-tuning with images from downstream pathological segmentation tasks to improve performance.

Related Projects
