SAM Clip Error #57
Thank you for creating this issue, and apologies for the delay in responding. Can I confirm that the image you are working with is a valid JPG? If an image is corrupted, it might not load, causing an error. |
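As a practical way to act on this suggestion (not part of the maintainer's reply), here is a sketch for flagging files that are not structurally valid JPEGs before labeling. The function names and the folder layout are assumptions; a cheap byte-level check like this does not guarantee the image decodes cleanly (Pillow's `Image.verify()` is a stronger test), but it catches truncated or mislabeled files:

```python
from pathlib import Path

def is_valid_jpg(path):
    """Cheap structural check: a JPEG file starts with FF D8 and ends with FF D9."""
    data = Path(path).read_bytes()
    return len(data) >= 4 and data[:2] == b"\xff\xd8" and data[-2:] == b"\xff\xd9"

def find_suspect_images(folder):
    """Return every .jpg under `folder` that fails the structural check."""
    return [p for p in Path(folder).rglob("*.jpg") if not is_valid_jpg(p)]
```

Running `find_suspect_images` over the dataset folder before calling `label()` would narrow down whether a corrupt file is the cause.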
We have replaced the autodistill-sam-clip module with the new model combination API. See the autodistill-sam-clip README for instructions on how to use the new module. Let me know if you run into the same issue with the new configuration. |
Hello,
I do not see any new code uploaded on the site.
Can …
|
Please refer to the SAM-CLIP README. Here is an example of the new API:

```python
from autodistill_clip import CLIP
from autodistill.detection import CaptionOntology
from autodistill_grounded_sam import GroundedSAM
import supervision as sv
from autodistill.core.custom_detection_model import CustomDetectionModel
import cv2

classes = ["McDonalds", "Burger King"]

# Detect candidate regions with GroundedSAM, then classify each one with CLIP
SAMCLIP = CustomDetectionModel(
    detection_model=GroundedSAM(
        CaptionOntology({"logo": "logo"})
    ),
    classification_model=CLIP(
        CaptionOntology({k: k for k in classes})
    )
)

IMAGE = "logo.jpg"

results = SAMCLIP.predict(IMAGE)

image = cv2.imread(IMAGE)

annotator = sv.MaskAnnotator()
label_annotator = sv.LabelAnnotator()

labels = [
    f"{classes[class_id]} {confidence:0.2f}"
    for _, _, confidence, class_id, _ in results
]

annotated_frame = annotator.annotate(
    scene=image.copy(), detections=results
)
annotated_frame = label_annotator.annotate(
    scene=annotated_frame, labels=labels, detections=results
)

sv.plot_image(annotated_frame, size=(8, 8))
``` |
Hello,
The code on GitHub is still six months old.
Csv
|
I know. The autodistill-sam-clip model is officially deprecated. The code in that repository is no longer used and will not be updated. This is noted in the README:

"This model has been replaced with the SAM-CLIP combination implemented with the Autodistill model combination API. This API enables you to combine a detection and a classification model for auto-labeling. See the code snippet below for an example of using SAM-CLIP with the new API."

We replaced it with CustomDetectionModel, which combines models. This new API works by combining different models (in my code above, GroundedSAM, which uses SAM, and CLIP). |
The new … I know it is confusing to have two things with the same name, but the implementations are different. Try the code I showed above (make sure you run …). |
Hello James,
Thanks for the update. It is a bit confusing at this time. I will dig into it and get back to you.
Csv
|
@csverma610 Have you solved the problem? If so, can we close this issue? |
I am going to close this issue, but we can re-open it if further assistance is required. |
Hello,
On several images in the dataset, I get the following error. Is this an issue with error handling in the code?

```
Labeling People/beautiful-serene-black-woman-reflection-1296x728-header.jpg:   6%|▏ | 7/114 [00:39<09:57, 5.58s/it]
Traceback (most recent call last):
  File "/media/csverma/M2Disk/Projects/CompVis/ObjectDetection/AutoDistill/SAMClip_labels.py", line 10, in <module>
    base_model.label(input_folder=folder_name, output_folder="mldata")
  File "/media/csverma/M2Disk/Projects/CompVis/ObjectDetection/AutoDistill/autodistillenv/lib/python3.11/site-packages/autodistill/detection/detection_base_model.py", line 44, in label
    detections = self.predict(f_path)
                 ^^^^^^^^^^^^^^^^^^^^
  File "/media/csverma/M2Disk/Projects/CompVis/ObjectDetection/AutoDistill/autodistillenv/lib/python3.11/site-packages/autodistill_sam_clip/sam_clip.py", line 162, in predict
    nms = sv.non_max_suppression(np.array(nms_data), 0.5)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/media/csverma/M2Disk/Projects/CompVis/ObjectDetection/AutoDistill/autodistillenv/lib/python3.11/site-packages/supervision/detection/utils.py", line 84, in non_max_suppression
    rows, columns = predictions.shape
    ^^^^^^^^^^^^^
ValueError: not enough values to unpack (expected 2, got 1)
```

```
(autodistillenv) %$ open People/beautiful-serene-black-woman-reflection-1296x728-header.jpg
```
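Not from the thread itself, but a note on what the traceback appears consistent with: if an image yields zero detections, `nms_data` is empty, and `np.array([])` collapses to a 1-D array of shape `(0,)`; `non_max_suppression` then fails when unpacking `rows, columns` from that shape. A sketch of a defensive wrapper, assuming this diagnosis (`safe_non_max_suppression` is hypothetical, not autodistill's actual code):

```python
import numpy as np

def safe_non_max_suppression(nms_data, iou_threshold=0.5):
    """Guard against the 1-D array produced when there are no detections.

    supervision's non_max_suppression expects a 2-D (rows, columns) array
    of [x1, y1, x2, y2, confidence] rows; an empty list produces a 1-D
    array of shape (0,), which triggers the unpack ValueError above.
    """
    predictions = np.array(nms_data)
    if predictions.ndim != 2 or predictions.shape[0] == 0:
        # No boxes to suppress: return an empty keep-mask instead of crashing
        return np.zeros(0, dtype=bool)
    import supervision as sv
    return sv.non_max_suppression(predictions, iou_threshold)
```

If this diagnosis holds, a check like this around the `sv.non_max_suppression` call (or skipping images with no detections) would avoid the crash.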