Multiple Points, Labels, and Boxes while Batch Prompting #111
@Jordan-Pierce Is this the same issue? #115
Hi @zdhernandez, not quite. Thanks for the response though.
Did you check the shape of your
@HannaMao @Jordan-Pierce I have the same issue. I found that the number of points and the number of boxes must be the same, otherwise an error occurs (8 boxes and 7 points):
@nikhilaravi @Jordan-Pierce I'm facing exactly the same problem. Have you solved this issue? My object detector detects two objects, but my SAM model only produces a mask for one of them. How can we do this for multiple objects in a single image?
I guess at the moment you need to run the decoder on every single object separately.
When there's only one object everything works fine, but it returns a complete mess when the object detector finds more than one object. Please tell me I'm wrong about this and am simply making mistakes when running SAM inference.
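The per-object fallback suggested above can be sketched as a simple loop that unions the individual masks. Everything here is illustrative: `run_sam` is a hypothetical stand-in for a real per-object call such as `predictor.predict(box=box, multimask_output=False)`, implemented as a toy box-fill so the merging logic is runnable on its own.

```python
import numpy as np

# Hypothetical stand-in for one SAM call per detected object; a real
# version would return the predicted mask for that single box prompt.
def run_sam(box, shape=(8, 8)):
    x1, y1, x2, y2 = box
    mask = np.zeros(shape, dtype=bool)
    mask[y1:y2, x1:x2] = True  # toy mask: just fill the box region
    return mask

detections = [(0, 0, 3, 3), (4, 4, 8, 8)]  # boxes from an object detector

# Union of the per-object masks gives one combined segmentation.
combined = np.zeros((8, 8), dtype=bool)
for box in detections:
    combined |= run_sam(box)
```

This avoids batched prompting entirely, at the cost of one decoder pass per object.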
Has
What should be used instead to perform batch prompting using points?
From the SAM GitHub: can I ask what B and N mean here in the BxNx2 tensor size? Also, points and boxes share B as a common parameter, so do I need the same number of points and boxes? Thank you.
The first question: B is the batch dimension (one entry per prompt, i.e. per box) and N is the number of points per prompt. The second question: no, for each box you need a set of points (I think you need a fixed number of points for each box, meaning you can't use 4 points for the first box and 5 points for the second, but I'm not sure about that).
That is correct, each box needs a sequence of point coords. |
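To make the shape contract discussed above concrete, here is a minimal sketch of mutually consistent inputs (the sizes and zero values are arbitrary placeholders, not real coordinates):

```python
import numpy as np

B, N = 3, 2  # 3 boxes, 2 points attached to each box

point_coords = np.zeros((B, N, 2), dtype=np.float32)  # BxNx2: (x, y) per point
point_labels = np.ones((B, N), dtype=np.float32)      # BxN: 1 = foreground, 0 = background
boxes = np.zeros((B, 4), dtype=np.float32)            # Bx4: (x1, y1, x2, y2)

# The batch dimension B must agree across all three tensors, and N must be
# the same for every box -- 8 boxes paired with 7 point sets cannot line up.
assert point_coords.shape[0] == point_labels.shape[0] == boxes.shape[0]
assert point_coords.shape[:2] == point_labels.shape
```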
https://github.com/ByungKwanLee/Full-Segment-Anything addresses the critical issues of SAM: it supports batch input on the full-grid prompt (automatic mask generation) with post-processing (removing duplicated or small regions and holes), under flexible input image sizes.
Thanks @ByungKwanLee |
Here, I have met all the size requirements, but I still get an error saying "too many indices for tensor of dimension 3" for the points. I have no idea how to diagnose the issue. If I use only the boxes, it runs fine though! Here, B = 4, N = 1.
It's hard to say without seeing the error, but one possible issue is that the
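The reply above is cut off, so the following is only a guess at one common cause of this kind of indexing error: `point_labels` passed with shape `(B,)` instead of `(B, N)` when `point_coords` is BxNx2.

```python
import numpy as np

point_coords = np.zeros((4, 1, 2))  # B=4, N=1, matching the report above
bad_labels = np.ones(4)             # shape (4,): missing the N dimension
good_labels = np.ones((4, 1))       # shape (4, 1): matches BxN

# The labels must mirror the first two dims of the coords tensor.
assert good_labels.shape == point_coords.shape[:2]
assert bad_labels.shape != point_coords.shape[:2]
```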
Thanks again for the release, very useful!
I'm currently trying to do batch prompting by providing bounding boxes (creating masks for objects already annotated), and I've noticed that sometimes the boxes alone do not produce complete masks. One idea was to provide a bounding box plus multiple points sampled within it, in the hope that together they would produce a better mask for those edge cases.
The notebook provides a clear example of how to perform a prediction for a single point, a single bounding box, and multiple bounding boxes, but not multiple points (with labels) combined with bounding boxes. When I try to do this I keep running into an error, and it's not clear whether I'm doing it incorrectly or whether this simply isn't supported. Below is an example of what I thought would work:
I understand that I can get the results I'm looking for by breaking the process up object-by-object and joining the masks, but I'd like to know if there is a working solution for this batched approach. Thanks.
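For reference, a hedged sketch of what a combined box-plus-points batch prompt could look like with `SamPredictor.predict_torch`. The predictor calls are shown only as comments because they need an image and a loaded checkpoint; the variable names, sample coordinates, and `device` are illustrative assumptions, not values from this thread.

```python
import numpy as np

# Two annotated objects (B=2), three sampled points per box (N=3).
boxes = np.array([[10., 10., 100., 100.],
                  [120., 30., 200., 150.]])                          # Bx4, xyxy
point_coords = np.array([[[30., 30.], [50., 50.], [70., 70.]],
                         [[140., 60.], [160., 90.], [180., 120.]]])  # BxNx2
point_labels = np.ones((2, 3))                                       # BxN, 1 = foreground

# Shape contract: B agrees everywhere, N agrees between coords and labels.
assert boxes.shape[0] == point_coords.shape[0] == point_labels.shape[0]
assert point_coords.shape[:2] == point_labels.shape

# With a SamPredictor that already holds an image (sketch only):
# import torch
# coords = predictor.transform.apply_coords_torch(
#     torch.as_tensor(point_coords, dtype=torch.float, device=device),
#     image.shape[:2])
# labels = torch.as_tensor(point_labels, dtype=torch.float, device=device)
# tboxes = predictor.transform.apply_boxes_torch(
#     torch.as_tensor(boxes, dtype=torch.float, device=device),
#     image.shape[:2])
# masks, scores, logits = predictor.predict_torch(
#     point_coords=coords, point_labels=labels,
#     boxes=tboxes, multimask_output=False)
# masks has shape (B, 1, H, W): one mask per box-plus-points prompt
```

Whether the extra points actually improve the masks for the edge cases described above would need to be verified empirically.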