The number of boxes in matching_gt_bbox is more than that of valid_gt? #39
Comments
@QingXIA233
Ok, got it, your method makes sense to me now. Thank you for the quick reply. I'll do it according to your method. Perhaps I'll also perform some experiments with label assignment similar to DETR later.
Hi, I am quite confused about 'gt_cls = 0'. I checked the code in 'get_matching_by_iou': it assigns the first gt_box to any proposal that matches none of the gt_boxes, with the corresponding 'gt_cls = 0'. The same is done in other places such as 'relabel_by_iou'. My question is: isn't 'gt_cls = 0' the vehicle class in the Waymo dataset (not the background class)? @Lzc6996
@qingzhouzhen
If I transfer to another dataset such as nuScenes, will this logic still work?
@qingzhouzhen Hi, in my understanding, in this repo the class labels start from 1 instead of 0, which means 0 becomes the label that is not considered. My own data's labels start from 0 (Car), so I add 1 to each of them before feeding the data to LiDAR R-CNN, then subtract 1 when I run create_results.py after testing the model.
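A minimal sketch of the label remapping described above, assuming a 0-based custom dataset (e.g. 0 = Car) and the repo's convention that label 0 means "unmatched / not considered". The function names here are hypothetical, not part of the LiDAR R-CNN codebase:

```python
import numpy as np

def shift_labels_for_lidar_rcnn(labels):
    """Shift 0-based dataset labels to 1-based ones before feeding
    the data to LiDAR R-CNN, freeing 0 for the 'ignore' label."""
    return np.asarray(labels) + 1

def shift_labels_back(labels):
    """Undo the shift on the outputs, e.g. after create_results.py."""
    return np.asarray(labels) - 1

raw = np.array([0, 1, 2])                 # 0-based labels (0 = Car)
fed = shift_labels_for_lidar_rcnn(raw)    # -> [1, 2, 3]
restored = shift_labels_back(fed)         # -> [0, 1, 2]
```

The same shift must be applied consistently to both the ground-truth labels and the detector's predicted labels, otherwise class 0 objects would silently be treated as unmatched.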
Hello, sorry to come back with another question......
Recently, I've been working on using LiDAR R-CNN to refine the results of a CenterPoint-PP model on my own dataset. During data processing, I noticed that my CenterPoint-PP model detects more bboxes than the ground truth contains (false detections). When the get_matching_by_iou function in LiDAR R-CNN runs, the resulting matching_gt_bbox has the same number of bboxes as the model predictions, not as the ground-truth data. I'm a bit confused by this. Since we are trying to do refinement, shouldn't we remove the falsely detected bboxes from the results and keep only the ground truth? If so, why does the matching follow the predictions instead of the ground truth?
Maybe I have some misunderstanding here; it would be a great help if you could give me some hints. Thanks in advance.
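Based on the behavior described in this thread (one matched GT box per proposal, with unmatched proposals assigned the first GT box and class 0), a simplified, hypothetical reconstruction of the per-proposal matching might look like this. Box format, threshold, and function names are assumptions for illustration, not the repo's actual implementation:

```python
import numpy as np

def iou_matrix(props, gts):
    """Pairwise IoU for axis-aligned boxes given as [x1, y1, x2, y2]."""
    ious = np.zeros((len(props), len(gts)))
    for i, p in enumerate(props):
        for j, g in enumerate(gts):
            ix1, iy1 = max(p[0], g[0]), max(p[1], g[1])
            ix2, iy2 = min(p[2], g[2]), min(p[3], g[3])
            inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
            area_p = (p[2] - p[0]) * (p[3] - p[1])
            area_g = (g[2] - g[0]) * (g[3] - g[1])
            ious[i, j] = inter / (area_p + area_g - inter)
    return ious

def match_by_iou(props, gts, gt_cls, iou_thr=0.5):
    """Assign one GT per proposal: len(matched_bbox) == len(props).
    Proposals below the IoU threshold (false positives) still get a
    box (the first GT) but class 0, i.e. the 'ignore/background'
    label in this repo's 1-based convention."""
    ious = iou_matrix(props, gts)
    best = ious.argmax(axis=1)
    matched_bbox = gts[best].copy()
    matched_cls = gt_cls[best].copy()
    unmatched = ious.max(axis=1) < iou_thr
    matched_bbox[unmatched] = gts[0]
    matched_cls[unmatched] = 0
    return matched_bbox, matched_cls

props = np.array([[0, 0, 2, 2], [5, 5, 7, 7.5], [20, 20, 22, 22]], float)
gts = np.array([[0, 0, 2, 2], [5, 5, 7, 7]], float)
gt_cls = np.array([1, 2])
bbox, cls = match_by_iou(props, gts, gt_cls)
# One target per proposal: cls is [1, 2, 0] — the third (false) proposal
# keeps a placeholder box but gets class 0, so training can treat it as
# a negative rather than dropping it.
```

This would explain the observation above: false positives are not removed during matching because the second stage is trained to classify them as background (class 0), which is how the refinement network learns to suppress them.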