How to detect objects smaller than the min bbox size of 32px? #202
Comments
The smallest anchor is size 32px, so we can detect objects with IoU > 0.5 to a 32x32px box, which means sqrt(0.5 · 32 · 32) ≈ 22.6 pixels, so about 23x23px (in the case of a square box, fully aligned with the anchor box). Detecting smaller objects will probably be very challenging. Of the options you suggested, (1) is probably the best. (2) should work in theory, but it introduces a lot of new anchors, all of which are really hard to train (because small objects are challenging). This might be hard to train (as you observed), but could be ok if you decrease the LR and the gamma parameter.
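The back-of-the-envelope calculation above can be sketched as follows (a minimal illustration only; `anchor_size` and `iou_thresh` are the values from the comment, not library parameters):

```python
import math

def min_detectable_side(anchor_size=32, iou_thresh=0.5):
    # A square box of side s, centered on and fully inside a square anchor
    # of side `anchor_size`, has intersection s^2 and union anchor_size^2,
    # so IoU = s^2 / anchor_size^2. Solving IoU >= iou_thresh for s gives:
    return anchor_size * math.sqrt(iou_thresh)

print(round(min_detectable_side(), 1))  # ≈ 22.6
```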
I'm going to go ahead and close this actually, since it is not an issue. Please use the keras team slack (see readme for instructions) for further questions & discussion.
Thanks for the feedback. I'll put questions there. And if anyone comes across this, I've found also that (1) is the best option.
Hi, I know this issue is closed now, but wanted to share my solution, as it might help others. Rather than scaling up the images, which increases memory usage and training time, I just forced my bboxes to be a minimum size (I found 29x29px seems to give roughly the best results).

This probably only works because my small objects are rare and quite sparsely distributed across my images. Also, my downstream processing doesn't care that the border around my identified object is slightly larger than that typically produced by this model. It might not work with densely packed objects, because of non-max suppression or multiple objects contributing their features.

One thing to remember here is that scaling up the bboxes in a script, as I did, rather than by manual annotation, means that they sometimes go off the edge of the image. I just moved the bbox up or down accordingly so that the min > 0 and the max < the image dimension. Cheers, T.
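The bbox-expansion trick described above could be sketched like this (a hypothetical helper, not code from this repo; the 29px minimum is the empirical value the commenter mentions):

```python
def enforce_min_bbox(box, img_w, img_h, min_size=29):
    """box = (x1, y1, x2, y2). Returns a box at least min_size wide and
    tall, shifted (not clipped) so it stays inside the image bounds."""
    def fix_axis(lo, hi, limit):
        size = hi - lo
        if size < min_size:
            pad = (min_size - size) / 2.0
            lo -= pad
            hi += pad
        if lo < 0:            # shift right/down into the image
            hi -= lo
            lo = 0.0
        if hi > limit:        # shift left/up into the image
            lo -= hi - limit
            hi = float(limit)
        return lo, hi

    x1, y1, x2, y2 = box
    x1, x2 = fix_axis(x1, x2, img_w)
    y1, y2 = fix_axis(y1, y2, img_h)
    return x1, y1, x2, y2
```

For example, a 5x5px box in the corner of a 100x100 image gets expanded to 29x29 and shifted back inside the image rather than clipped, which preserves the minimum size.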
P.S. ...this also means you don't have to scale up your test images!
I was trying to detect a rare small object, and adjusting the anchor generation parameters, particularly the scale ratios helped during training. |
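To see how the scales interact with the per-level base sizes, here is a minimal sketch. The default sizes and scales below follow the RetinaNet defaults; the idea of adding an extra sub-unit scale is an assumption for illustration, and the exact parameter names in this repo's anchor configuration may differ:

```python
# Base anchor size per pyramid level (P3..P7) and the default scales.
sizes  = [32, 64, 128, 256, 512]
scales = [2 ** 0, 2 ** (1.0 / 3), 2 ** (2.0 / 3)]

effective = sorted(s * sc for s in sizes for sc in scales)
print(effective[:3])   # smallest anchors: 32, ~40.3, ~50.8 px

# Adding a hypothetical 0.5 scale shrinks the smallest anchor below 32px,
# at the cost of many extra (hard-to-train) anchors per level:
small_scales = [2 ** -1] + scales
print(min(s * sc for s in sizes for sc in small_scales))  # 16.0
```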
This is not an issue, sorry, just a question. How would you go about detecting objects that are smaller than the smallest anchor size of 32px? For example, cars in satellite images or seals on a rock. It's a difficult question for me because this repo and the paper are set up to ignore the high-resolution feature layer C2 and have a minimum bbox size of 32 pixels.
I'm thinking: (1) scaling up the input images so the small objects cover at least 32px, or (2) adding smaller anchors (e.g. by using the higher-resolution C2 feature level).