Greater IoU than Recall? #35
Comments
BTW, I've also noticed that the learning rate becomes 10 times larger after 100 iterations. For example, when I set the learning rate to 0.0001 as in the example, it automatically changes to 0.001 after 100 iterations and the network diverges. So I had to set the learning rate to 0.00001 so that it would become 0.0001, and the network worked just fine. Is it programmed this way?
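The jump described above matches darknet's "steps" learning-rate policy, where the base rate is multiplied by a scale factor once training passes each listed step (e.g. `policy=steps`, `steps=100,25000,35000`, `scales=10,.1,.1` in older VOC cfg files). A minimal Python sketch of that schedule, as an illustration rather than darknet's actual code:

```python
# Sketch of a "steps" learning-rate policy as described in the comment above:
# the base rate is multiplied by each scale once the matching step is passed.
# The step/scale values below are illustrative, taken from old VOC-style cfgs.

def current_lr(base_lr, iteration, steps, scales):
    """Return the learning rate in effect at a given iteration."""
    lr = base_lr
    for step, scale in zip(steps, scales):
        if iteration >= step:
            lr *= scale
    return lr

# Before the first step the base rate applies; after iteration 100
# it is multiplied by 10, which is the behavior reported above.
print(current_lr(0.0001, 50,  [100, 25000, 35000], [10, 0.1, 0.1]))
print(current_lr(0.0001, 200, [100, 25000, 35000], [10, 0.1, 0.1]))
```

This is why lowering the cfg's `learning_rate` by a factor of 10 yields the intended effective rate after the first step.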
Yes, strictly speaking, Recall should always be greater than (or equal to) IoU. But Yolo calculates the average of the best IoUs instead of the average IoU, and counts True Positives instead of computing Recall: https://en.wikipedia.org/wiki/Precision_and_recall
Line 432 in b3a3e92
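The distinction Alexey describes can be sketched in a few lines: for each ground-truth box, take the best IoU over all detections, average those best values, and count a true positive when the best IoU clears a threshold. This is a hedged illustration, not darknet's actual code; the 0.5 threshold and the corner-format boxes are assumptions.

```python
# Illustration (not darknet's code) of "average of best IoUs" plus a
# true-positive count, versus a textbook overlap/object-area recall.
# Boxes are (x1, y1, x2, y2); the 0.5 IoU threshold is an assumption.

def iou(a, b):
    """Intersection-over-union of two corner-format boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def avg_best_iou_and_tp(truths, detections, thresh=0.5):
    """Average the best IoU per ground-truth box; count TPs above thresh."""
    best = [max((iou(t, d) for d in detections), default=0.0) for t in truths]
    avg = sum(best) / len(best) if best else 0.0
    true_positives = sum(1 for b in best if b > thresh)
    return avg, true_positives
```

Because the average is taken over best-match IoUs while the "Recall" column is really a TP count, the two numbers are not bound by the Recall >= IoU inequality.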
"the network worked just fine" - It depends on the number of classes and the number of images. For PascalVOC seems optimal values in the How it is programmed - see paragraph 5: #30 (comment) If
Well, that's why my Recall curve looked so much like the True Positive curve. Thank you for your reply, though, and for your amazing work. I've finished training on the VOC dataset, validated the network on the VOC test set, and compared my result to the yolo-voc.weights I downloaded. I noticed that although I'm getting about as many true positives and about the same average IoU as the downloaded network, my network has noticeably more RPs/Img (about 160 vs 75), so I have some questions:
Hard to say. But it may also be an effect of the bug on Windows that I just corrected: 4422399
No, this should not significantly affect performance.
Thanks for the correction, Alexey, it seems to work... I can't tell for sure yet, though. One last question: since I'm currently working on autonomous driving, my camera has a really wide angle and an unusual aspect ratio of about 3:1. For now I'm getting an average IoU of about 65% on my dataset, and that's not good enough when detecting objects for autonomous driving. I wonder if I could improve this somehow. Again, thank you for your amazing work and amazing answers.
You can try to set Line 4 in 4e9798d
I used Yolo to detect a wide image (stitched from 8 cameras) with a wide angle of ~200°, but I divided it into many 416x416 square images and ran Yolo on each square image across 4 separate GPUs. I think if your training dataset has the same 3:1 aspect ratio as your detection dataset, then you should use the square resolution 416x416. To increase IoU:
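The tiling approach described above (splitting a wide frame into 416x416 squares and running detection per tile) can be sketched as follows. The 416 tile size comes from the comment; the stride and the edge-covering logic are illustrative assumptions, and stitching detections back together is left out.

```python
# Sketch of splitting a wide frame into square tiles for per-tile detection,
# as the comment above describes. Tile size 416 is from the comment; the
# stride and edge handling are assumptions for illustration.

def tile_offsets(width, height, tile=416, stride=416):
    """Yield (x, y) top-left corners of square tiles covering the frame."""
    xs = list(range(0, max(width - tile, 0) + 1, stride))
    if xs[-1] + tile < width:      # ensure the right edge is covered
        xs.append(width - tile)
    ys = list(range(0, max(height - tile, 0) + 1, stride))
    if ys[-1] + tile < height:     # ensure the bottom edge is covered
        ys.append(height - tile)
    for y in ys:
        for x in xs:
            yield x, y

# A 3:1 frame of 1248x416 splits cleanly into three 416x416 tiles:
print(list(tile_offsets(1248, 416)))  # [(0, 0), (416, 0), (832, 0)]
```

Each crop would then be passed to the detector, and per-tile box coordinates offset back by (x, y) into the full frame.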
Thank you so much for your answer, I will try them out.
Hi @PatricLee
Hi @iraadit, sorry for the late reply. But if you are in the same scenario as I am, where all the data have the same aspect ratio, then maybe Alexey is right: there is no point in training a non-square network with the same aspect ratio as the data instead of a square one.
Why should we not train the model with newly calculated anchors?
How can we calculate them when each image has its own width and height?
I am training 5 classes on a CPU (Intel Core i7-5500, 2.4 GHz) with 8 GB RAM. How many pictures per class should I train on to get good results, and how long will it take to finish?
Not sure why you chose this closed issue to post your question, but I would argue that you cannot realistically train a 5-class network on a CPU; it would take weeks if not months. Get yourself a decent GPU, or rent one from Amazon AWS, Linode, Google, Azure, etc. See this recent post I made about a 2-class network: it took 4 hours to train with a GPU, but it would have taken 16 days on my 16-core 3.2 GHz CPU: https://www.ccoderun.ca/programming/2020-01-04_neural_network_training/
Hi,
I'm training YOLO on VOC 2007 & 2012. Since I want to plot curves of IoU and Recall, I validated every set of weights in /backup, and I noticed that after validating on 2007_test with yolo-voc_100.weights, the average IoU is 44.04% while Recall is only 29.57%.
As I see it, IoU is Area of Overlap / Area of Union, and Recall is Area of Overlap / Area of Object. Since the Area of Object is no larger than the Area of Union, Recall should always be greater than (or equal to) IoU, yet my data shows otherwise.
Please tell me which part I got wrong.
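The inequality argued in the question can be checked numerically: with the same overlap area, overlap/union can never exceed overlap/object-area, because the union contains the object. A tiny sketch with made-up areas:

```python
# Numeric check of the reasoning above: union >= object area, so
# IoU = overlap/union <= overlap/object_area. The areas are made up.

overlap = 30.0
object_area = 50.0
predicted_area = 60.0
union = object_area + predicted_area - overlap  # 80.0

iou = overlap / union              # 0.375
recall_like = overlap / object_area  # 0.6

assert union >= object_area
assert iou <= recall_like
```

So under the textbook definitions the reported numbers (IoU 44.04% > Recall 29.57%) would indeed be impossible, which is what prompts the question.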