questions with respect to the negative samples. #4

Open
d12306 opened this issue Jan 12, 2020 · 3 comments
d12306 commented Jan 12, 2020

Hi @chengchunhsu, thanks for your implementation. I actually have a concern about how the MIL loss is computed for the negative samples. As stated in the original paper, the number of negative samples should be equal to the number of positive samples. However, in the code implementation there is no such balancing mechanism.
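
To make the concern concrete, this is the kind of balancing I have in mind (a rough sketch; the tensor names are placeholders, not from this repo):

    import torch

    # Rough sketch of a balancing step (placeholder names, not repo code):
    # subsample the negative bags so their count matches the positive bags
    # before averaging the MIL loss over them.
    def balance_bags(pos_bag_scores, neg_bag_scores):
        num_pos = pos_bag_scores.size(0)
        if neg_bag_scores.size(0) > num_pos:
            keep = torch.randperm(neg_bag_scores.size(0))[:num_pos]
            neg_bag_scores = neg_bag_scores[keep]
        return pos_bag_scores, neg_bag_scores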

Also, I am concerned about the way the negative samples are drawn. It seems they are sampled from the negative proposals that have a low IoU with the ground-truth bbox; don't some of those proposals still overlap considerably with the pixels inside the bbox (the positive samples)?

Thanks,


d12306 commented Jan 12, 2020

Also, I found a serious problem in the code: there seem to be no negative samples involved in the MIL training stage. The negative boxes are already suppressed when the region proposal network is trained, and the "keep_only_positive_boxes" function filters out the negative predictions (keeping only boxes with sufficient IoU with a ground-truth bbox) before the boxes are fed into the mask head. So when you later try to separate positive and negative samples during MIL training by IoU with the ground-truth bounding boxes, it cannot work. I tested the code on COCO2017, and it seems that all the proposals are positive.
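
To make this concrete, here is a simplified, stand-in view of what I mean (not the actual repo code):

    import torch

    # Simplified stand-in (not the actual repo code). Each proposal carries
    # the label assigned by IoU matching in the ROI sampling stage; only
    # matched (label > 0) proposals survive this filter, so a later split by
    # IoU with the GT boxes cannot produce negatives anymore.
    def keep_only_positive_boxes(proposal_boxes, proposal_labels):
        keep = proposal_labels > 0
        return proposal_boxes[keep], proposal_labels[keep]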

Could you please check on this issue? @chengchunhsu


chengchunhsu commented Jan 14, 2020

Hi d12306,

Thank you for asking.
Here are some implementation details about the sampling part within the released code.

First, the function "keep_only_positive_boxes" does filter out the negative proposals.
All the following processes, i.e., detection and segmentation, involve only positive proposals.

Next, we sample positive and negative bags from the positive proposals.
Note that some crossing lines of a positive proposal have no overlap with the ground-truth box, and those crossing lines can be sampled as negative bags.
You can check the bag labels by yourself.
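
Here is a toy illustration (not the exact repo code) of how crossing lines of a positive proposal can still become negative bags:

    import torch

    # Toy illustration (not the exact repo code): once the GT box is projected
    # onto a proposal's grid, crossing lines containing no foreground receive
    # a negative bag label, even though the proposal itself is positive.
    M = 6
    mask = torch.zeros(M, M)   # GT box projected onto the proposal grid
    mask[1:4, 2:5] = 1         # GT covers only part of the proposal
    row_is_pos = torch.tensor([bool(mask[r, :].any()) for r in range(M)])
    col_is_pos = torch.tensor([bool(mask[:, c].any()) for c in range(M)])
    # here rows 0, 4, 5 and columns 0, 1, 5 are negative bags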

Finally, the released code does not constrain the ratio between the numbers of produced positive and negative bags. The ratio does not seem to affect the performance much.

Please let me know if you have any further questions.

Best,
Cheng-Chun


d12306 commented Jan 15, 2020

@chengchunhsu, thank you so much for answering.
But I am still confused about how this piece of code can work:

            # generate labels for the positive samples: a crossing line (row or
            # column) of the projected mask is labeled positive if it contains
            # any foreground pixel, and negative otherwise
            pos_labels = []
            for mask in pos_masks_per_image:
                label_col = [torch.any(mask[col, :] > 0) for col in range(mask.size(0))]
                label_row = [torch.any(mask[:, row] > 0) for row in range(mask.size(1))]
                label = torch.stack(label_col + label_row)
                pos_labels.append(label)
            pos_labels = torch.stack(pos_labels).float()
            labels_per_image[pos_inds] = pos_labels

So can the function
def project_boxes_on_boxes(matched_bboxes, proposals, discretization_size):
really separate the non-intersected region and set its value to 0?
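
My current guess at its behaviour is roughly the following (just my reading, not the actual implementation):

    import torch

    # Rough guess at the behaviour (not the actual implementation): the matched
    # GT box is rasterized into the proposal's MxM grid, so grid cells outside
    # the intersection stay 0 and the corresponding crossing lines end up with
    # negative bag labels.
    def project_box_on_box(gt_box, proposal_box, M):
        px1, py1, px2, py2 = proposal_box
        gx1, gy1, gx2, gy2 = gt_box
        sx = M / (px2 - px1)
        sy = M / (py2 - py1)
        c1 = max(int((gx1 - px1) * sx), 0)
        c2 = min(int((gx2 - px1) * sx), M)
        r1 = max(int((gy1 - py1) * sy), 0)
        r2 = min(int((gy2 - py1) * sy), M)
        mask = torch.zeros(M, M)
        if r2 > r1 and c2 > c1:
            mask[r1:r2, c1:c2] = 1
        return mask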
