
Matte Anything!🐒

Interactive Natural Image Matting with Segment Anything Models

Authors: Jingfeng Yao, Xinggang Wang📧, Lang Ye, Wenyu Liu

Institute: School of EIC, HUST

(📧) corresponding author


================================================================================

Updates in this fork:

  • Use text input (with adjustable text settings) to select foreground objects, instead of clicking points in the image (see the sketch after this list)
  • Reworked transparency settings for better tuning on images with transparent objects
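As a rough illustration of how a text prompt can stand in for point clicks, here is a minimal sketch using GroundingDINO's inference helpers; the caption, thresholds, and image path are illustrative placeholders, not this fork's actual settings:

from groundingdino.util.inference import load_model, load_image, predict

# Paths assume GroundingDINO was cloned into the repo root (see Quick Installation)
grounding_dino = load_model(
    "GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py",
    "./pretrained/groundingdino_swint_ogc.pth",
)
image_source, image = load_image("examples/cat.jpg")  # hypothetical example image

# The "text settings": the caption names the foreground object, and the two
# thresholds control how confident a detection must be to count as a match.
boxes, logits, phrases = predict(
    model=grounding_dino,
    image=image,
    caption="cat",
    box_threshold=0.35,
    text_threshold=0.25,
)
# The detected boxes can then be passed to SAM as box prompts in place of clicked points.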

Refer to the Colab notebook to try out the demo in Google Colab.

Samples using text input:

[sample images: cat, dog, sofa, boy, dog, man, dog, chair]

Samples for transparency:

[sample images: bulb alpha, crystal alpha]

Sample with full or no transparency values in the settings:

[sample images: bulb on wire, with full and with no transparency]

==============================================================================


📢 News

  • 2023/06/08 We released the arXiv tech report!
  • 2023/06/08 We released the source code of Matte Anything!

The project is still in progress; you can try the early version now. Thanks for your attention! If you like Matte Anything, you may also like the foundation work it builds on, ViTMatte.

📜 Introduction

We propose Matte Anything (MatAny), an interactive natural image matting model. It can produce high-quality alpha mattes from a variety of simple hints. The key insight of MatAny is to generate a pseudo-trimap automatically from contour and transparency predictions. We leverage task-specific vision models to enhance the performance of natural image matting.
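The pseudo-trimap idea can be sketched in a few lines: erode the segmentation mask to get definite foreground, dilate it to get an unknown band, and let the matting model resolve the band. This is a minimal sketch, not the repo's actual implementation (which also folds in transparency predictions); the kernel sizes are illustrative:

import cv2
import numpy as np

def mask_to_trimap(mask, erode_kernel_size=10, dilate_kernel_size=10):
    # Eroded mask -> definite foreground (255); the band between the dilated
    # and eroded masks -> unknown (128); everything else -> background (0).
    mask = (mask > 0).astype(np.uint8)
    erode_k = np.ones((erode_kernel_size, erode_kernel_size), np.uint8)
    dilate_k = np.ones((dilate_kernel_size, dilate_kernel_size), np.uint8)
    fg = cv2.erode(mask, erode_k)
    unknown = cv2.dilate(mask, dilate_k)
    trimap = np.zeros_like(mask)
    trimap[unknown > 0] = 128
    trimap[fg > 0] = 255
    return trimap

The erode_kernel_size and dilate_kernel_size knobs exposed in the web UI map onto this idea: larger kernels widen the unknown band that the matting model must resolve.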


🌞 Features

  • Matte Anything with Simple Interaction
  • High Quality Matting Results
  • Ability to Process Transparent Objects

🎮 Quick Start

Try Matte Anything with our web UI!


Quick Installation

Install Segment Anything Models as follows:

pip install git+https://github.com/facebookresearch/segment-anything.git

Install ViTMatte as follows:

python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
pip install -r requirements.txt

Install GroundingDINO as follows:

cd Matte-Anything
git clone https://github.com/IDEA-Research/GroundingDINO.git
cd GroundingDINO
pip install -e .

Download the pretrained models SAM_vit_h, ViTMatte_vit_b, and GroundingDINO-T, and put them in ./pretrained.
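As a quick sanity check that the checkpoints are in place, the SAM model can be loaded like this (a minimal sketch; the checkpoint filename follows the official SAM release, and ViTMatte is loaded through detectron2's config system in matte_anything.py, so it is omitted here):

import torch
from segment_anything import sam_model_registry, SamPredictor

device = "cuda" if torch.cuda.is_available() else "cpu"

# SAM with the ViT-H backbone
sam = sam_model_registry["vit_h"](checkpoint="./pretrained/sam_vit_h_4b8939.pth")
sam.to(device)
predictor = SamPredictor(sam)

GroundingDINO can be loaded with groundingdino.util.inference.load_model, as in the sketch near the top of this README.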

Run our web-ui!

python matte_anything.py

How to use

  1. Upload an image and click on it (default: foreground point).
  2. Click Start!.
  3. Optionally, adjust erode_kernel_size and dilate_kernel_size for a better trimap (a sketch of how a click turns into a matte follows this list).
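Under the hood, a click becomes a point prompt for SAM, and the resulting mask feeds the trimap step sketched in the Introduction. A minimal illustration, assuming the predictor from the loading sketch above; image_rgb, x, and y are placeholders:

import numpy as np

predictor.set_image(image_rgb)        # image_rgb: HxWx3 uint8 RGB array
masks, scores, _ = predictor.predict(
    point_coords=np.array([[x, y]]),  # the clicked pixel
    point_labels=np.array([1]),       # 1 marks a foreground point
    multimask_output=False,
)
trimap = mask_to_trimap(masks[0])     # reuse the sketch from the Introduction
# The trimap and the image then go to ViTMatte to predict the final alpha matte.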

🎬 Demo

[demo video: matte_anything.mp4]

Visualization of SAM and MatAny on real-world data from AM-2K and P3M-500:

[comparison images]

Visualization of SAM and MatAny on Composition-1k:

[comparison images]

📋 Todo List

  • adjustable trimap generation
  • arxiv tech report
  • add example data
  • support user transparency correction
  • support text input
  • finetune ViTMatte for better performance

🤝 Acknowledgement

Our repo is built upon Segment Anything, GroundingDINO, and ViTMatte. Thanks for their work.

Citation

@article{matte_anything,
  title={Matte Anything: Interactive Natural Image Matting with Segment Anything Models},
  author={Yao, Jingfeng and Wang, Xinggang and Ye, Lang and Liu, Wenyu},
  journal={arXiv preprint arXiv:2306.04121},
  year={2023}
}
