OpenMMLab Detection Toolbox and Benchmark
Updated Aug 21, 2024 · Python
From images to inference with no labeling: use foundation models to train supervised models.
Must-have resource for anyone who wants to experiment with and build on the OpenAI vision API 🔥
API for Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series
👁️ + 💬 + 🎧 = 🤖 Curated list of top foundation and multimodal models! [Paper + Code + Examples + Tutorials]
A tab for sd-webui for replacing objects in pictures or videos using a detection prompt
GroundedSAM Base Model plugin for Autodistill
Grounding DINO module for use with Autodistill.
A simple demo using Grounding DINO and Segment Anything 2 (SAM 2) together
Synthetic dataset generation with Stable Diffusion, with segmentation masks produced by Grounding DINO and the Segment Anything Model
Combining three computer vision foundation models, Segment Anything Model (SAM), Stable Diffusion, and Grounding DINO, to edit and manipulate images.
This project explores the intersection of NLP and CV, showcasing the potential of leveraging three powerful models – SAM, Stable Diffusion, and Grounding DINO – to edit and manipulate images through textual commands.
Autodectify: Detect and Export Objects with Zero-Shot Object Detection Models
Automatic data labeling tool, with flexible search patterns, using the power of GroundingDINO (detect anything with language).
A minimalistic web app for zero-shot object detection from textual prompts using GroundingDINO
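Most of the projects above wrap the same zero-shot workflow: pass a text prompt naming the target classes and get bounding boxes back. A minimal sketch, assuming the Hugging Face `transformers` port of Grounding DINO (the `IDEA-Research/grounding-dino-tiny` checkpoint is one assumed choice; other checkpoints from the series follow the same API):

```python
def format_grounding_prompt(labels):
    """Grounding DINO text prompts are lowercase labels joined by ' . ' with a trailing dot."""
    return " . ".join(label.strip().lower() for label in labels) + " ."


def detect(image, labels):
    """Hedged sketch of zero-shot detection via the transformers port of Grounding DINO.

    Requires `pip install torch transformers` and downloads model weights on first use,
    so it is not executed here.
    """
    import torch
    from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

    model_id = "IDEA-Research/grounding-dino-tiny"  # assumed checkpoint
    processor = AutoProcessor.from_pretrained(model_id)
    model = AutoModelForZeroShotObjectDetection.from_pretrained(model_id)

    inputs = processor(images=image, text=format_grounding_prompt(labels), return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # Turn raw logits/boxes into detections above the confidence thresholds.
    return processor.post_process_grounded_object_detection(
        outputs,
        inputs.input_ids,
        box_threshold=0.35,
        text_threshold=0.25,
        target_sizes=[image.size[::-1]],  # PIL size is (width, height); model wants (height, width)
    )


print(format_grounding_prompt(["Cat", "Dog"]))  # cat . dog .
```

The prompt-formatting convention (lowercase, dot-separated, trailing dot) is what lets one string stand in for an open vocabulary of classes, which is the common thread across the labeling and editing tools listed here.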