feat: Segment Anything Model Integration #253
Conversation
Thank you for your work, @shondle! In the https://www.trainyolo.com implementation, they use:
I can't wait to try this. It would take this tool to a whole new level. :)
Thanks for contributing to the community! We have integrated it into OpenMMLab PlayGround, which supports Point2Label and BBox2Label (stars welcome): https://github.com/open-mmlab/playground/tree/main/label_anything
fix: ENV variable AUTO_UPDATE can't set false (HumanSignal#256)
I added some information earlier; no problem, I will add your GitHub username in OpenMMLab PlayGround. By the way, would you like to contribute code to OpenMMLab PlayGround? https://github.com/open-mmlab/playground/tree/main/label_anything
Checking in on this — @KonstantinKorotaev, is there anything I can do to help get this across the line? 😊
+1, extremely interested in this
Regarding the "Eraser" feature: in my opinion, it should support both adding and erasing smart keypoints for the same target label item.
Hi |
@shondle, could you please check the pytest failures and try to fix them in the new PR? It seems the run timed out because the model took too long to load.
Could you add functionality to convert from bounding boxes to segmentations? This would be extremely useful for converting detection datasets to segmentation datasets for YOLO models.
Here I added the ability to create the mask with a smart rectangle label. For converting an existing dataset, you would need to change the box input (x, y, width, and height) to gather it from the tasks (what is already annotated on the image) instead of from the kwargs. Then you could select all of the images and send them through the model. I am unsure about this, but for faster inference over a large set of images it may be better to use the PyTorch model from this commit instead of the ONNX model I referenced earlier.
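For the box-to-mask conversion above, the main mechanical step is translating an existing rectangle annotation into the pixel box SAM expects. Label Studio stores rectangle results as percentages of the image size, while SAM's box prompt is `[x0, y0, x1, y1]` in pixels. A minimal sketch of that conversion (the function name is my own; the percent convention follows Label Studio's rectangle result format):

```python
def ls_rect_to_sam_box(x, y, width, height, img_w, img_h):
    """Convert a Label Studio rectangle (x, y, width, height as percent
    of the image) into a [x0, y0, x1, y1] pixel box for a SAM box prompt.
    """
    x0 = x / 100.0 * img_w
    y0 = y / 100.0 * img_h
    x1 = x0 + width / 100.0 * img_w
    y1 = y0 + height / 100.0 * img_h
    return [x0, y0, x1, y1]
```

Gathering `x, y, width, height` from each task's existing annotations and feeding the converted boxes to the predictor would batch-convert a detection dataset, as suggested above.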
I have tried to fix it, but I am running into issues with Docker (which I do not have much experience with). I fixed a few issues before this PR was merged, but was unable to figure out the fix for the timeout. If anyone with more Docker knowledge is able to fix this issue and get it over the hump, I would appreciate it. Otherwise, I will try to get back to this later.
Hi, I tried installing this and everything seemed to go smoothly until I tried to get a prediction:
It fails when getting the context from
Any idea what goes wrong?
Did you add the ML backend as interactive?
Great!
Please deactivate the option "Show predictions to annotators in the Label Stream and Quick View" in the Machine Learning settings for your project and check one more time.
The ML backend HTTP address does not seem to work properly. Is there any fix for the SAM backend?
I am also looking forward to this feature, especially in conjunction with this project: https://github.com/facebookresearch/ov-seg (demo: https://huggingface.co/spaces/facebook/ov-seg). In the test example in vietanhdev/anylabeling#89, labels on other cells in the image can be manually corrected to the cells that need to be labeled and converted into instance segmentation, which greatly improves labeling efficiency. With another project like this, image annotation could be combined with text prompts.
Bumping this issue up. This seems to be a persistent issue on my end as well. Is there a solution?
Why has this problem not been solved? It has been reported 5-6 times and is still not fixed!
Hi, are you using the smart keypoint tool on the toolbar while selecting one of the bottom two labels provided in the labeling config in the README? Other things to check: Auto-Annotation and "Auto accept annotation suggestions" should be on in the image tab before you place the smart keypoint with one of the bottom labels to make a prediction; "Use for interactive preannotations" should be toggled on when you add your model; and in the Machine Learning tab, only the bottom toggle, "Show predictions to annotators in the Label Stream and Quick View", should be activated.
Once you have the smart keypoint selected, can you click off to the side of the screen to deselect? If it does not stay selected and you need to toggle through the options, just keep clicking the purple box in the toolbar, not the sidebar.
Thanks |
This adds the ability to use Facebook's Segment Anything Model (SAM) with Label Studio.
Users can place a smart keypoint to generate a brush annotation with SAM for any object in an image, and adjust it using tools already provided in Label Studio. This makes it much easier to create annotations quickly for any image segmentation use case.
For this, I created an ML backend that takes the image and the smart keypoint a user places in Label Studio, and uses them to generate a mask for the selected object with SAM. The mask is then converted to RLE and passed back to Label Studio, where it is rendered as a brush annotation.
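The mask-to-RLE step above can be pictured with a plain run-length encoding of a flattened binary mask. This is an illustrative sketch only; the actual backend would use Label Studio's own brush RLE encoder, whose on-the-wire format differs from this simple pair encoding:

```python
def mask_to_rle(mask):
    """Encode a flat iterable of 0/1 pixel values as alternating
    [value, run_length, value, run_length, ...] pairs.

    Simplified stand-in for Label Studio's brush RLE format, just to
    show the flatten-then-count idea behind the conversion.
    """
    rle = []
    for px in mask:
        if rle and rle[-2] == px:
            rle[-1] += 1          # extend the current run
        else:
            rle.extend([px, 1])   # start a new run
    return rle
```

For example, the flattened mask `[0, 0, 1, 1, 1, 0]` encodes to `[0, 2, 1, 3, 0, 1]`: two background pixels, three mask pixels, one background pixel.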
This has been tested in a computing cluster environment and on a CPU (after adjusting the device parameter) on a local machine.