Add Grounded SAM2 Interactive Image Segmentation to Computer Vision #2
🎯 What I Did
Hey there! I've implemented Grounded SAM2 Image Segmentation for the computer vision section - a flexible interactive tool that segments objects using different types of prompts.
Quick Overview
This adds a flexible image segmentation solution that works with three different prompt types:
- Point prompts - click coordinates labeled as foreground or background
- Box prompts - a bounding box around the target object
- Text prompts - a natural-language description grounded to objects in the image
The implementation is designed to be educational and practical, showing how modern segmentation models like SAM2 can be integrated into real workflows.
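For concreteness, these prompts are usually encoded as small arrays or strings, along the lines of the SAM2-style sketch below; the variable names here are illustrative and not taken from the module.

```python
import numpy as np

# Typical SAM2-style prompt encodings; names are illustrative, not the module's API.
point_coords = np.array([[100, 100]])  # (N, 2) pixel coordinates of clicks
point_labels = np.array([1])           # 1 = foreground click, 0 = background click
box = np.array([50, 50, 150, 150])     # (x1, y1, x2, y2) bounding box
text_prompt = "object in center"       # free-form phrase for grounded detection
```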
📂 What's Included
File Added:
computer_vision/grounded_sam2_segmentation.py (379 lines)

Key Features:
🔧 Implementation Details
Class: GroundedSAM2Segmenter

Main Methods (a usage sketch appears under Testing & Validation below):
- segment_with_points() - Point-based segmentation
- segment_with_box() - Box-based segmentation
- segment_with_text() - Text-grounded segmentation
- apply_color_mask() - Visualization helper

Edge Cases Handled:
✅ Testing & Validation
Doctests: 31 tests, 0 failures ✨
```
$ python3 -m doctest computer_vision/grounded_sam2_segmentation.py -v
...
31 tests in 9 items.
31 passed and 0 failed.
Test passed.
```
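For readers new to doctests, a docstring test in this style looks roughly like the toy below (a self-contained illustration, not copied from the module):

```python
def count_mask_pixels(mask) -> int:
    """
    Count the True pixels in a boolean segmentation mask.

    >>> import numpy as np
    >>> mask = np.zeros((8, 8), dtype=bool)
    >>> mask[2:6, 2:6] = True
    >>> count_mask_pixels(mask)
    16
    """
    return int(mask.sum())


if __name__ == "__main__":
    import doctest

    doctest.testmod()
```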
Demonstration Output:

```
$ python3 computer_vision/grounded_sam2_segmentation.py
============================================================
Grounded SAM2 Segmentation Demonstration
============================================================
1. Point-based segmentation
   Generated mask shape: (200, 200)
   Segmented pixels: 7245
2. Bounding box segmentation
   Generated mask shape: (200, 200)
   Segmented pixels: 8100
3. Text-grounded segmentation
   Detected objects: 1
   Object 1:
   - Label: object in center
   - Confidence: 0.85
   - BBox: (50, 50, 150, 150)
   - Mask pixels: 7845
4. Visualization
   Result image shape: (200, 200, 3)
```

All functionality working perfectly! 🎉
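The demo above corresponds roughly to the call sequence sketched below. Keyword names such as point_coords, point_labels, box, text_prompt, and color are guesses inferred from the demo output and the checklist, not the module's verified API.

```python
import numpy as np

from computer_vision.grounded_sam2_segmentation import GroundedSAM2Segmenter

# Synthetic test image: a bright square on a dark background.
image = np.zeros((200, 200, 3), dtype=np.uint8)
image[50:150, 50:150] = 255

segmenter = GroundedSAM2Segmenter()

# 1. Point prompt: a foreground click near the centre of the object.
point_mask = segmenter.segment_with_points(
    image, point_coords=[(100, 100)], point_labels=[1]
)

# 2. Box prompt: a rough bounding box around the object.
box_mask = segmenter.segment_with_box(image, box=(50, 50, 150, 150))

# 3. Text prompt: grounded detection plus a mask per detected object.
detections = segmenter.segment_with_text(image, text_prompt="object in center")

# 4. Visualization: overlay the point mask in red on the original image.
overlay = segmenter.apply_color_mask(image, point_mask, color=(255, 0, 0))

print(point_mask.shape, box_mask.shape, len(detections), overlay.shape)
```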
📚 Technical Highlights
Design Principles:
Why This Matters:
📋 Contribution Checklist
Describe your change:
Requirements:
- File placed in an existing directory: computer_vision/grounded_sam2_segmentation.py
- Filename in lowercase with no spaces or dashes: grounded_sam2_segmentation.py ✓
- Class name: GroundedSAM2Segmenter (PascalCase)
- Function names: segment_with_points, apply_color_mask (snake_case)
- Variable names: mask_threshold, point_coords (snake_case)
- Type hints on parameters and return values (list[dict[str, Any]], etc.)
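As an illustration of those conventions, a conforming signature would look roughly like this (hypothetical; the actual parameters in the file may differ):

```python
from typing import Any

import numpy as np


def segment_with_text(
    image: np.ndarray,
    text_prompt: str,
    mask_threshold: float = 0.5,
) -> list[dict[str, Any]]:
    """Hypothetical signature illustrating the snake_case names and type hints above."""
    raise NotImplementedError  # placeholder; see the module for the real implementation
```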
🔗 References

🙏 Acknowledgments
Thanks to @NANDAGOPALNG for requesting this feature and to the maintainers for reviewing! This implementation provides a solid foundation for understanding modern interactive segmentation techniques.
Ready for review! Happy to make any adjustments. 😊
Closes TheAlgorithms#13516