Satellite Imagery (tiny objects) Generalization #51
Comments
Yes, you're right, I think so. |
The same issue occurs for me. |
Sure, I'll let you know if I manage to fine-tune SAM on the particular aerial imagery dataset (xView1) that I've used, and I'll share the results. |
After reading the paper I get the impression that this is one of the limitations.
It's a promptable model, and 'segment everything' probably works on a generated grid of prompt points; if every point misses part of a fine structure (like a plane), that structure will be missed. What you might try is something similar to what the authors did when preparing the dataset. Another approach would be to generate prompt points not from a grid, but informed by the image itself: using a sensitive edge detector, for example, one can sample prompt points such that each connected component has at least one point sampled from it. |
@kretes : Thank you, this was very insightful! |
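The edge-informed sampling idea above can be sketched without any SAM dependencies: given a binary mask (e.g. thresholded Canny edges), label its connected components and emit one prompt point per component. This is a minimal pure-Python sketch with toy data standing in for a real edge mask; the function name and mask are illustrative, not from any SAM API:

```python
from collections import deque

def prompt_points_from_mask(mask):
    """Return one (row, col) prompt point per 4-connected component
    of a binary mask given as a list of lists of 0/1."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    points = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # New component: keep its first pixel as the prompt point,
                # then flood-fill (BFS) to mark the rest as visited.
                points.append((r, c))
                seen[r][c] = True
                queue = deque([(r, c)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return points

# Two small "aircraft" blobs that a coarse prompt grid could easily miss:
mask = [
    [1, 1, 0, 0, 0],
    [0, 1, 0, 0, 1],
    [0, 0, 0, 1, 1],
]
print(prompt_points_from_mask(mask))  # → [(0, 0), (1, 4)]
```

Each returned point could then be fed to SAM as a foreground point prompt, guaranteeing that no connected structure in the edge mask goes unprompted.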
Hi @Radhika-Keni , thanks for your post. I'm looking for the "segment everything" option that you mentioned, and can't find it. |
Hi. I think SAM cannot segment satellite images precisely. |
Would love to know the fine-tuning workflow too. |
The resulting segments are a list of dictionaries. Does anyone know how to transform that into a georeferenced image? |
The following repo might be helpful: https://github.com/aliaksandr960/segment-anything-eo/blob/main/README.md |
I just released the segment-geospatial Python package, making it easier to segment satellite imagery and export the results in various vector formats. |
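On the question of turning the list of mask dictionaries into a raster: SAM's automatic mask generator returns dicts that include a boolean `segmentation` array and an `area` value per mask, and those can be flattened into a single label image. A minimal NumPy sketch (the function name and the toy masks are illustrative; real input would be the generator's output):

```python
import numpy as np

def masks_to_label_raster(masks, height, width):
    """Flatten SAM-style mask dicts (each with a boolean 'segmentation'
    array and an integer 'area') into one uint16 raster:
    0 = background, 1..N = mask IDs. Larger masks are burned first so
    smaller objects stay on top where masks overlap."""
    label = np.zeros((height, width), dtype=np.uint16)
    ordered = sorted(masks, key=lambda m: m["area"], reverse=True)
    for i, m in enumerate(ordered, start=1):
        label[m["segmentation"]] = i
    return label

# Toy example with two fake masks instead of real SAM output:
a = np.zeros((4, 4), dtype=bool); a[:2, :2] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:3] = True
masks = [{"segmentation": a, "area": int(a.sum())},
         {"segmentation": b, "area": int(b.sum())}]
label = masks_to_label_raster(masks, 4, 4)
```

To make the result georeferenced, write `label` out with rasterio, copying the profile/transform from the source GeoTIFF (e.g. via `rasterio.open(src).profile`), so the label raster inherits the satellite image's CRS and geotransform.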
Hi @Khoo Yong Yong,
In response to your question: I did not run inference through their code directly, so I do not know what the corresponding parameter for the 'segment everything' option would be when calling their code-base API. I used only the graphical UI to run inferences. I did notice that a couple of people have asked the same question on their 'issues' tab, so you may want to search there to see whether the SAM team has replied to them or they have figured it out themselves!
…On Tue, Apr 25, 2023 at 12:45 PM Khoo Yong Yong ***@***.***> wrote:
Hope this helps [image]
<https://user-images.githubusercontent.com/68383273/230651606-67a0e904-4fad-4b0d-bbea-7373964d07d8.png>
Hi @Radhika-Keni <https://github.com/Radhika-Keni>, I would like to know the exact parameters of SamAutomaticMaskGenerator for this 'SAM Everything' option. The reason I'm asking is that when I try it locally (default params, or with the sample code), the result is different: the one on the web is much better than mine.
|
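For reference, these are the constructor defaults of `SamAutomaticMaskGenerator` in the segment-anything repo. The web demo's exact settings are not published, so any match to it is a guess; for tiny objects, `points_per_side`, `crop_n_layers`, and `min_mask_region_area` are the usual knobs to try:

```python
# Defaults of segment_anything.SamAutomaticMaskGenerator;
# collected here as a plain dict for readability.
sam_defaults = {
    "points_per_side": 32,        # prompt grid: 32 x 32 = 1024 points
    "points_per_batch": 64,       # prompts processed per forward pass
    "pred_iou_thresh": 0.88,      # filter masks by predicted quality
    "stability_score_thresh": 0.95,
    "crop_n_layers": 0,           # >0 re-runs the grid on zoomed-in crops,
                                  # which helps recover small objects
    "crop_n_points_downscale_factor": 1,
    "min_mask_region_area": 0,    # >0 removes tiny disconnected specks
}

# With a loaded SAM model, the generator would be built roughly as:
# mask_generator = SamAutomaticMaskGenerator(sam_model, **sam_defaults)
```

Raising `points_per_side` (denser grid) and setting `crop_n_layers` to 1 or 2 is a plausible reason the hosted demo segments small aircraft that the local defaults miss, though that remains an assumption.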
@Radhika-Keni
At the same time, I am also interested in whether you have found a way to fine-tune or otherwise improve the results. |
@yong2khoo-lm : I have decided not to try to fine-tune SAM. Here's why: SAM stands for "Segment Anything", and the reason it is revolutionary is that it claims to be able to segment any image. If we have to fine-tune it on a dataset to get the results we expect, are we not defeating its very purpose? |
@Havi-muro : Thanks so much for sharing this! I would love to try it out on my dataset. Could you please share how much denser you made their original grid before you ran the inference? |
There are 10,000 prompting points and the image is 4096x4096 pixels at 3 m per pixel. I think the default is 32 points per side and 64 points per batch (I'm unsure what the latter means), so I multiplied the density by roughly 10 (for no particular reason; that is just the number of training points I have for a prediction model). |
Gotcha!! Thanks so much for sharing, appreciate it!! |
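The numbers in that exchange can be sanity-checked with a little arithmetic: at the default 32 points per side, prompts on a 4096 px image sit 128 px apart, which at 3 m per pixel is 384 m, so an aircraft a few tens of metres long easily falls between prompts. 10,000 points (about 100 per side) cuts the spacing to roughly 41 px (~123 m). A quick sketch:

```python
import math

def prompt_spacing_px(image_px, points_per_side):
    """Approximate spacing between neighbouring grid prompts, in pixels."""
    return image_px / points_per_side

image_px = 4096   # image width/height in pixels (from the comment above)
gsd_m = 3         # ground sample distance: 3 m per pixel

default_spacing = prompt_spacing_px(image_px, 32)        # 128.0 px
dense_side = math.isqrt(10_000)                          # 100 points/side
dense_spacing = prompt_spacing_px(image_px, dense_side)  # 40.96 px

print(default_spacing * gsd_m)  # → 384.0 (metres between default prompts)
print(dense_spacing * gsd_m)    # → 122.88 (metres with the denser grid)
```

This is consistent with the "multiplied the density by 10" description: 10,000 points versus the default 32 x 32 = 1024.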
follow |
Thank you for the incredible work & congratulations!
SAM does not seem to generalize as well on satellite imagery (tiny objects). This was the result of the "segment everything" option on the image.
However, SAM works better on the same image if I manually prompt the model toward an object of interest (such as the tiny aircraft in the LHS corner) that it may have missed in the "segment everything" option.
A couple more examples with the "segment everything" option:
Any insights on this would be most helpful!