An AI-powered image segmentation application that uses Meta's SAM2 (Segment Anything Model 2) for automatic segmentation, together with a range of classical and AI-based matting algorithms.
## Features

- Automatic Segmentation:
  - Uses Meta's SAM2 model to automatically find and segment all objects in an image.
  - Adjustable parameters: `points_per_side`, `pred_iou_thresh`, `stability_score_thresh`.
- Matte Generation:
  - Generates high-quality alpha mattes from an image and a mask.
  - Supports multiple matting algorithms:
    - ViTMatte: a vision-transformer-based approach.
    - Classical and learning-based algorithms (via `pymatting`):
      - `cf`: Closed-Form Matting
      - `knn`: KNN Matting
      - `lbdm`: Learning-Based Digital Matting
      - `lkm`: Large Kernel Matting
  - Adjustable parameters: `erosion_kernel_size`, `dilation_kernel_size`, `max_size`.
- Web Interface:
  - Built with React and PIXI.js for interactive visualization.
  - Allows users to upload images, view segmentation masks, and download results.
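To make the segmentation parameters concrete, here is a minimal sketch (an illustration, not the application's actual code; the helper names are hypothetical) of what they control: `points_per_side` sets the density of the grid of prompt points that SAM2's automatic mask generator samples over the image, while `pred_iou_thresh` and `stability_score_thresh` discard low-confidence candidate masks.

```python
import numpy as np

def build_point_grid(points_per_side: int) -> np.ndarray:
    """Evenly spaced grid of normalized prompt points; the automatic
    mask generator tries points_per_side ** 2 prompts in total."""
    offset = 1 / (2 * points_per_side)
    coords = np.linspace(offset, 1 - offset, points_per_side)
    xs, ys = np.meshgrid(coords, coords)
    return np.stack([xs.ravel(), ys.ravel()], axis=-1)

def filter_masks(candidates, pred_iou_thresh, stability_score_thresh):
    """Keep only candidate masks that pass both quality thresholds."""
    return [m for m in candidates
            if m["predicted_iou"] >= pred_iou_thresh
            and m["stability_score"] >= stability_score_thresh]
```

In general, raising either threshold yields fewer but cleaner masks, while raising `points_per_side` helps find small objects at the cost of runtime.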
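Similarly, `erosion_kernel_size` and `dilation_kernel_size` govern how a binary mask is turned into a trimap for the matting step: erosion shrinks the mask to a definite-foreground core, while dilation expands it to bound the unknown band that the matting algorithm resolves into soft alpha values. A rough illustration of that step (assumed logic, not the repository's implementation), using the conventional 0/128/255 trimap encoding:

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def make_trimap(mask: np.ndarray,
                erosion_kernel_size: int = 10,
                dilation_kernel_size: int = 10) -> np.ndarray:
    """Convert a binary mask into a trimap: eroded interior becomes
    definite foreground (255), the dilated border band becomes
    unknown (128), and everything else stays background (0)."""
    fg_structure = np.ones((erosion_kernel_size, erosion_kernel_size), bool)
    bg_structure = np.ones((dilation_kernel_size, dilation_kernel_size), bool)
    fg = binary_erosion(mask.astype(bool), structure=fg_structure)
    possible = binary_dilation(mask.astype(bool), structure=bg_structure)
    trimap = np.zeros(mask.shape, dtype=np.uint8)
    trimap[possible] = 128   # uncertain band around the mask edge
    trimap[fg] = 255         # confident foreground core
    return trimap
```

Larger kernels widen the unknown band, giving the matting algorithm more room to recover fine detail such as hair, but also more pixels to solve.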
## Prerequisites

- Docker and Docker Compose
- Git
- Node.js and pnpm (for local development)
## Getting Started

1. Clone the repository:

   ```sh
   git clone https://github.com/sandmor/segmenter
   cd segmenter
   ```

2. Set up environment variables: copy the example `.env` file and customize it as needed.

   ```sh
   cp .env.example .env
   ```

3. Download the models. This will download the default `tiny` SAM2 model:

   ```sh
   ./download_models.sh
   ```

   You can also specify a model variant (`tiny`, `small`, `base_plus`, or `large`):

   ```sh
   ./download_models.sh large
   ```

4. Install dependencies:

   ```sh
   pnpm install:all
   ```

5. Start the development servers:

   ```sh
   pnpm run dev
   ```

   This starts the backend on port 8000 and the frontend on port 5173.
## Running with Docker

After cloning the repository, setting up your `.env` file, and downloading the models, you can run the application using Docker Compose.

1. Build and run the container:

   ```sh
   docker-compose up --build -d
   ```

2. Access the application: open your browser and navigate to `http://localhost`. The port can be changed in your `.env` file.

3. Follow the logs:

   ```sh
   docker-compose logs -f
   ```

4. Stop the application:

   ```sh
   docker-compose down
   ```
## Deployment

The recommended deployment method is the provided script, which relies on Docker Compose.

1. Run the deployment script.

   For local deployment:

   ```sh
   ./deploy.sh
   ```

   For production with a custom domain:

   ```sh
   ./deploy.sh your-domain.com
   ```

2. Access the application:
   - Local: `http://localhost`
   - Production: `http://your-domain.com`
## GPU Support

To enable GPU acceleration for the model, you need the NVIDIA Container Toolkit installed on your system. Once the toolkit is installed, enable GPU support by uncommenting the `deploy` section in `docker-compose.yml` and setting `DEVICE=cuda` in your `.env` file.
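For reference, a GPU reservation in a Compose file typically looks like the following (the service name `backend` is an assumption; check the commented-out section in this repository's `docker-compose.yml` for the exact stanza):

```yaml
services:
  backend:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```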
## Configuration

Create a `.env` file to configure the application. See `.env.example` for the available options.

- `SAM2_MODEL`: `tiny`, `small`, `base_plus`, or `large` (default: `tiny`)
- `DEVICE`: `cpu`, `cuda`, or `mps` (default: `cpu`)
- `PORT`: server port (default: `8000`)
- `CORS_ORIGINS`: comma-separated list of allowed origins
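A minimal `.env` might look like this (the values below are illustrative; see `.env.example` for the authoritative list):

```env
SAM2_MODEL=tiny
DEVICE=cpu
PORT=8000
CORS_ORIGINS=http://localhost:5173
```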