Object detection: hard negatives #2544
Comments
Depends on what network you are using, I guess. For example, Faster R-CNN treats as negatives those boxes whose IoU with any groundtruth is under a certain threshold (0.3 by default, I think). It then takes as many negative examples as positive ones (this is configurable, I think). To be more certain, I guess you could use object classification (detecting both A and B) and then only keep the A detections. |
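For reference, the sampling ratio mentioned above is exposed in the Faster R-CNN pipeline config (the 0.3/0.7 anchor-matching IoU thresholds themselves are the Faster R-CNN defaults and are not config fields). A typical fragment, with the commonly used default values rather than anything specific to this thread, looks roughly like:

```
model {
  faster_rcnn {
    # First stage: sample 256 anchors per image,
    # aiming for half positives and half negatives.
    first_stage_minibatch_size: 256
    first_stage_positive_balance_fraction: 0.5
    # Second stage: sample 64 proposals, 25% positives.
    second_stage_batch_size: 64
    second_stage_balance_fraction: 0.25
  }
}
```

If there are not enough positives to meet the fraction (e.g. a purely negative image), the sampler fills the remainder of the minibatch with negatives.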
@OverFlow7 You can already do this, as @cipri-tom mentioned. Can you specify the model you are working with? |
Hello, thank you for your answers. I am using Faster R-CNN with Inception v2. What cipri-tom mentioned is indeed a good idea that I will try, but it still requires either that the hard negatives appear in the same picture as the object I am trying to detect, or hand-labelling the hard negatives. |
Hello, I guess I have a similar/same problem. Using the Inception v2 SSD model, the model gives me great precision and recall on a test set that contains at least one of the classes I am interested in. But as soon as I add some images which do not contain any of my classes, I start getting false positives. I tried training with these images, providing an empty array in place of the ground-truth boxes, but this did not seem to help. It would be nice to get answers to two questions: (1) Is this an existing problem in the code, or am I doing something wrong here? (2) Is there a recommended solution other than adding another classifier? Would something like modifying the loss functions to take these examples into account work? Thanks! |
@niyazpk Sorry, I should have been more specific. You need to set min_negatives_per_image to a non-zero number in the config for the model to sample boxes from purely negative images: https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/ssd_inception_v2_coco.config#L118 . A few tens might be a good number to choose. @OverFlow7, you can have purely negative images and faster_rcnn models will sample anchors from them. We use sampling in both stages with a certain ratio of positives to negatives. If the sampler can't achieve that ratio (as in purely negative images), it fills the batch with negatives. See https://github.com/tensorflow/models/blob/master/research/object_detection/core/balanced_positive_negative_sampler.py#L18 |
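For anyone looking for where that field lives: min_negatives_per_image sits inside the hard_example_miner block of the SSD loss config. A sketch of that section (the surrounding values are the ones shipped in the sample SSD configs, not tuned for any particular dataset) looks like:

```
loss {
  hard_example_miner {
    num_hard_examples: 3000
    iou_threshold: 0.99
    loss_type: CLASSIFICATION
    max_negatives_per_positive: 3
    # Non-zero so that purely negative images (zero groundtruth
    # boxes) still contribute sampled negative anchors to the loss.
    min_negatives_per_image: 10
  }
}
```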
I feel this discussion might benefit others if we continue it on Stack Overflow. Can you please move the discussion there if my response does not already answer your question sufficiently? I will close this issue for now. |
@tombstone Thank you! This was helpful. |
Thank you @tombstone. But how do you create TFRecords with purely negative images? What do you put in the .xml? |
Hello everyone, I am new to Faster R-CNN and I am trying to extract negative examples from my training dataset (i.e. getting the bounding boxes of regions with label == 0, meaning no object). I am reading the code but I am still confused about how to do this. Is it possible to extract a negative example (a part of an image) from the training dataset? Any help would be appreciated. Thank you!!! |
@Kuldeep-Attri what do you mean by extract? Please use Stack Overflow to ask questions (and give good examples if you want answers 😃)! |
@cipri-tom I am sorry if I was not clear; what I mean by extract is: can we save the negative examples as images (e.g. .jpg files) once we know the bounding boxes of the negative examples and have the original image? I want to store all background crops in a folder. Thank you!!! |
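If the question is just how to cut those regions out once the boxes are known, a minimal sketch (plain NumPy; the pixel box format `(ymin, xmin, ymax, xmax)` and the function name are assumptions for illustration, not part of the Object Detection API) could look like:

```python
import numpy as np

def crop_region(image, box):
    """Return the sub-image for a pixel box (ymin, xmin, ymax, xmax)."""
    ymin, xmin, ymax, xmax = box
    return image[ymin:ymax, xmin:xmax]

# Example: cut a 20x30 background patch out of a dummy 100x200 RGB image.
image = np.zeros((100, 200, 3), dtype=np.uint8)
patch = crop_region(image, (10, 40, 30, 70))
print(patch.shape)  # (20, 30, 3)

# To store the crop in a folder as .jpg, hand `patch` to any image
# library, e.g. Pillow: Image.fromarray(patch).save("bg_0001.jpg")
```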
Hello, I am also seeing a similar problem while using the "ssd_mobilenet_v1_pets" model. The model gives me good accuracy on positive examples, but as soon as I add some images which do not contain any of my classes, I start getting false positives (it detects a face as a thumb with 0.99 confidence). |
@cipri-tom @tombstone @OverFlow7 Please help us resolve this issue; we have made several attempts over the past 10 days. Please let us know where we are failing; please find the link below. We are attempting to detect people without helmets on motorbikes. |
I was able to train the ssd_mobilenet_v1_coco model with quite good accuracy. It successfully detects and classifies 5 different classes and does not have any issues with negative examples (no false positives with a threshold value of 0.5). I used the VoTT tool to prepare a Pascal VOC dataset for training. A negative image's annotation simply contains no object entries: <?xml version="1.0"?>
<annotation verified="yes">
<folder>Annotation</folder>
<filename>20171203_211505_10730</filename>
<path>C:\Tensorflow\models\research\object_detection\VOCdevkit\VOC2012\JPEGImages\20171203_211505_10730.jpg</path>
<source>
<database>Unknown</database>
</source>
<size>
<width>768</width>
<height>432</height>
<depth>3</depth>
</size>
<segmented>0</segmented>
</annotation> Hope this helps! |
@KKatya Did you get that XML & image data into the TFRecord format for training? |
@joskaaaa Yes, I used create_pascal_tf_record.py to create TFRecord files for training and evaluation. |
@KKatya can you please share the hard_example_miner section of your pipeline.config? I am a bit confused about using negative images in training, because Zhichao Lu from Google recently said the following: "For your negative images, is it true that you have zero groundtruth boxes? We currently don't support explicit negative image in training so adding those images don't help at all." This comment relates to training an SSD object detector: https://stackoverflow.com/questions/52696408/tensorflow-object-detection-api-detecting-the-humans-not-wearing-helmet/52698252 Anyway, if you got good results by adding the negative images to your training set, that sounds awesome... I'd love to know how many negative vs. positive images you are using. |
@in-skyadav According to the explanation in HardExampleMiner, it is applied during loss calculation: in your case, when all input samples contain the target class, just keep min_negatives_per_image at 0. If you want to add some pure background samples without any foreground targets, setting it to a few tens is enough. |
Are you sure that this script created TFRecords that contain the negative images? I used a similar script that creates TFRecords from .csv files extracted from the XML files (I used the verify feature in LabelImg to create XMLs for negative images), and it created the TFRecord successfully, but it only contained the data for images with labels; it did not include the empty images at all. |
Hi, I am facing a similar issue. When I set min_negatives_per_image to 0 and do not add any negative images, predictions are correct for positive images but I get a lot of false positives. When I add a bunch of negative images (say 50% positive and 50% negative) and set min_negatives_per_image to 10, the results are not good: I get more false negatives as well as false positives. Could anyone help me with this issue? I am using SSD MobileNet v2 for fine-tuning. |
What ratio should we follow to set this? |
I am facing the same issue. I have to create a TFRecord file from an image file without any label. Can anyone help me with this? I would be very grateful. |
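One way to see what a purely negative example needs: in the tf.train.Example, the per-box feature lists are simply present but empty. A sketch of the feature dictionary (plain Python; the function name is made up for illustration, and each list would be wrapped in a tf.train.Feature of the matching type, the way create_pascal_tf_record.py does, before building the Example):

```python
def negative_example_features(filename, width, height, encoded_jpeg):
    """Feature lists for an image with zero groundtruth boxes.

    Each value below would be wrapped in a tf.train.Feature
    (BytesList / FloatList / Int64List) before serializing the
    tf.train.Example. For a negative image every per-object list
    is empty, NOT omitted.
    """
    return {
        'image/filename': [filename.encode('utf8')],
        'image/encoded': [encoded_jpeg],
        'image/format': [b'jpeg'],
        'image/width': [width],
        'image/height': [height],
        # No objects: all box/class lists stay empty.
        'image/object/bbox/xmin': [],
        'image/object/bbox/xmax': [],
        'image/object/bbox/ymin': [],
        'image/object/bbox/ymax': [],
        'image/object/class/text': [],
        'image/object/class/label': [],
    }

features = negative_example_features('bg_0001.jpg', 640, 480, b'...')
print(len(features['image/object/bbox/xmin']))  # 0
```

A CSV-based conversion script that only iterates over labelled rows will silently skip such images, which matches the symptom described above; the script has to emit an Example for every image, labelled or not.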
Hello!
I am currently using object detection on my own dataset. For some of my classes, I get a lot of false positives with high scores (>0.99, so raising the score threshold won't help).
I know there is already some hard negative mining implemented, but would it be possible to have a feature where one could add hard negative examples to the training set?
Let's say we want to detect object A, and we know that object A looks a lot like object B, but we are not interested in detecting object B. In that case we could add images of object B to training (without any bounding boxes) in order for the network to distinguish between A and B.