ValueError: bg_num_rois = 0 and fg_num_rois = 0, this should not happen! #111
When I tried to train the network with my own dataset, which is actually https://github.com/udacity/self-driving-car/tree/master/annotations, this problem occurred after 900 iterations in the first epoch. Can you point out the source of the problem?

Comments
I should also mention that my repo is up to date; I saw the commit related to the same error, but it didn't solve the problem.
Your last commit didn't solve the problem. I've already said that my repo is up to date.
I have the same issue on the pascal_voc_0712 dataset (at iteration 5000 of epoch 1) after trying to implement R-FCN with this repo. Does anyone know a possible reason?
Hi @WillSuen, are you using the most recent code?
@jwyang Yes, I just tried with the most recent code but still got the same issue. The training process goes well, with the loss decreasing each iteration, and then at iteration 5000 this error appears.
I have changed several parameters in the cfg. Could these parameters be the reason? I'll change them back to see what happens.
I didn't change any parameters but still got the error.
It seems something is wrong with the dataset I used. For the data batch where the error comes out, I printed gt_boxes and found that it is empty.
This is weird, because I downloaded Pascal VOC from the official website. I'll double-check the dataset.
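A minimal sketch of such a check, assuming Pascal VOC-style XML annotations (the path is a placeholder, not part of this repo):

```python
# Hypothetical sanity check: flag Pascal VOC-style XML annotation files that
# contain no <object> entries, since those images yield empty gt_boxes.
import os
import xml.etree.ElementTree as ET

ann_dir = "data/VOCdevkit2007/VOC2007/Annotations"  # placeholder path, adjust to your layout

empty = []
for fname in sorted(os.listdir(ann_dir)):
    if not fname.endswith(".xml"):
        continue
    tree = ET.parse(os.path.join(ann_dir, fname))
    if not tree.findall("object"):
        empty.append(fname)

print(f"{len(empty)} annotation files contain no objects")
for fname in empty[:20]:
    print("  ", fname)
```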
I finally fixed the error I hit: I had changed the path of the config.py file, so the program used the wrong .pkl cache file when loading the dataset. Hope this helps.
I figured out that rpn_loss_box and rpn_class_box are returning NaN; do you have any idea why that is happening? @jwyang
And I am pretty sure my ground-truth boxes are correct; I think there is a problem with the ROIs and the RPN.
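One way to catch this earlier is a NaN guard on the loss terms. The sketch below is only an assumption about how a training loop might expose its losses; the loss names in the commented call come from this thread and are not verified against the repo:

```python
# Hedged sketch: raise as soon as any loss term becomes NaN, so a bad batch or
# a too-high learning rate can be inspected before the ROI sampler hits the
# "bg_num_rois = 0 and fg_num_rois = 0" assertion.
import math

def check_losses(step, **losses):
    """Raise if any loss value is NaN."""
    bad = [name for name, value in losses.items() if math.isnan(value)]
    if bad:
        raise RuntimeError(
            f"NaN in {bad} at iteration {step}; "
            "try lowering the learning rate or inspecting gt_boxes")

# Hypothetical use inside the training loop:
# check_losses(step,
#              rpn_loss_box=rpn_loss_box.item(),
#              rpn_class_box=rpn_class_box.item())
```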
@artuncF I thought the new commit had already solved this issue, but it turns out it didn't. I will check again.
Try verifying that
@artuncF Have you tried lowering the learning rate?
I solved the problem. It was related to my annotation files; the network is now working accurately.
I met the same problem. We can simply skip that batch.
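A minimal sketch of that workaround, assuming the error surfaces as a ValueError from the forward pass (the wrapper and argument names are placeholders, not the repo's actual training loop):

```python
# Hedged sketch of "skip that batch": wrap the forward pass and treat this
# specific ValueError as a signal to move on to the next iteration.
# `model` and its arguments are placeholders for the Faster R-CNN call.
def run_step(model, im_data, im_info, gt_boxes, num_boxes):
    """Return model outputs, or None when the ROI sampler finds no fg/bg rois."""
    try:
        return model(im_data, im_info, gt_boxes, num_boxes)
    except ValueError as err:
        if "bg_num_rois = 0 and fg_num_rois = 0" in str(err):
            return None  # caller can `continue` to the next batch
        raise
```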
@jwyang Hi, I met this bug when training on COCO 2017, and I found it was caused by a bug in roibatchLoader.py. To keep the same aspect ratio for images within the same batch, they are cropped randomly there. However, when an image is very long and its objects are near the edge, the crop is prone to cut the objects out, so the number of bounding boxes becomes zero. Because the cropping is random, simply skipping such a batch is a good idea.
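To illustrate that failure mode, here is a small self-contained example; the helper is hypothetical and not the repo's cropping code, it only shows how a crop along the long side can leave an image with zero boxes:

```python
# Hypothetical illustration of the cropping issue: boxes that fall entirely
# outside a horizontal crop window are dropped, so an image can end up with
# zero ground-truth boxes. Boxes are (x1, y1, x2, y2, class).
import numpy as np

def boxes_surviving_crop(gt_boxes, x_start, crop_width):
    """Shift and clip boxes to [0, crop_width) and keep those with positive width."""
    boxes = gt_boxes.copy()
    boxes[:, [0, 2]] = np.clip(boxes[:, [0, 2]] - x_start, 0, crop_width - 1)
    return boxes[boxes[:, 2] > boxes[:, 0]]

# A single object near the right edge of a very wide image:
gt = np.array([[900.0, 10.0, 950.0, 60.0, 1.0]])
crop = boxes_surviving_crop(gt, x_start=0, crop_width=600)
print(crop.shape[0])  # 0 -> no boxes survive, which triggers the error
```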
Cleaning up the data/cache files helped me with this.
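For reference, clearing the cache can be as simple as removing that directory so the cached roidb pickles are regenerated on the next run (the path follows the comment above; verify it matches your checkout before deleting anything):

```python
# Remove cached roidb pickles so they are rebuilt from the current annotations.
import shutil
shutil.rmtree("data/cache", ignore_errors=True)
```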
I have the same issue when training on my own dataset.
|
check #594 |