Low loss and low mAP for damage detection #4359
What mAP do you get on the Training dataset?
Your model is trained very well. https://github.com/AlexeyAB/darknet#how-to-improve-object-detection
Yes, I understand this. The problem is that the object I am trying to identify has too many variations. Damage is not a well-defined object with clear features, unlike cars, animals and other classes. My question is: what do you suggest in order to build a better object detector? Should I only increase the number of images, or can I also optimize some parameters in the config file?
You are using bad images in your training dataset. Did you get the Training and Validation datasets by evenly and randomly dividing a single dataset into 80% Training and 20% Validation images? You should do this.
Yes, the training and validation sets come from a random 80%/20% split of the same dataset.
I have used the same code for another project and it worked perfectly. Here is the Python code for the split (I masked some parts of the code):

```python
# Split train/validation for early_stopping and create text files with relative paths
import os

image_folder = MASKED!!!

# Split 80% train / 20% validation
split_index = int(0.8 * len(image_list))

# We don't want the same claim number in both training and validation
# (image title starts with a 9-digit code)
training, validation = image_list[:split_index], image_list[split_index:]

def write_file(data, name):
    ...  # body omitted in the original comment

write_file(training, 'train2.txt')
```
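One caveat about the split above: slicing at an index only keeps a claim number on one side of the boundary if the list happens to be ordered by claim code, and it is not shuffled at all. A minimal sketch of a split that shuffles at the group level instead, assuming filenames start with a 9-digit claim code as described in the thread (the helper name `group_split` is hypothetical, not from the original code):

```python
import random
from collections import defaultdict

def group_split(image_list, train_frac=0.8, seed=42):
    """Split image paths ~80/20 so that files sharing the same
    9-digit claim-number prefix never cross the train/val boundary."""
    groups = defaultdict(list)
    for path in image_list:
        # filename starts with a 9-digit claim code (per the thread)
        claim = path.rsplit('/', 1)[-1][:9]
        groups[claim].append(path)
    keys = sorted(groups)
    random.Random(seed).shuffle(keys)  # even, reproducible shuffle of claims
    cut = int(train_frac * len(keys))
    train = [p for k in keys[:cut] for p in groups[k]]
    val = [p for k in keys[cut:] for p in groups[k]]
    return train, val
```

Shuffling the claim keys (rather than individual images) gives an evenly random split while still keeping all photos of one claim together.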
So I think that your dataset is too small. Also, what mAP@0.05 can you get by using
Accuracy on the validation dataset is very low even at low IoU thresholds, while accuracy on the training dataset is very high. This means that your training set is not representative: there are very few objects of the kind you want to detect. Just collect 4-8x more images and follow these rules: #4359 (comment)
Thank you! I will increase the number of images and post again if I need help later.
I'm training YOLOv3 with the COCO dataset (only 5 categories):
Problem: during training the loss decreases, but mAP is always 0.0%. The COCO dataset has already been used for this type of training, so I am not using bad images in my training dataset, right? Any ideas?
I do not think that damage is an object. In COCO terms it is "stuff". I think it may be better to use semantic segmentation networks like BiSeNet and others (but it is much harder to annotate images). https://paperswithcode.com/sota/real-time-semantic-segmentation-on-cityscapes
Show the bad.list and bad_labels.list files
This is normal. set
Ok, thank you very much @AlexeyAB. Sorry for the silly question, but how do I do it on Windows?
Thanks!! =)
No, it's normal
How can an object have zero size? That's bad.
It's normal. Check your dataset by using Yolo_mark and run training with the flag -show_imgs
Would you look at my conversion script for the bounding boxes, please?
It's normal. This is due to data augmentation. Your dataset is correct. Train longer using many more images. https://github.com/AlexeyAB/darknet#when-should-i-stop-training
Ok, thank you very much for the help! =)
When I train YOLOv3 to detect damage on images of cars, I get a low average loss but also a really low mAP. The model is overfitting a lot (see below). I have trained the model on 500 images of damaged cars.
Any suggestions on how to reduce overfitting for this type of detection? I can increase the number of photos and change the config file. Also, since the damage doesn't have any consistent aspect ratio, what are the optimal anchors for this problem? If I increase the number of anchors, will it help on the validation set?
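On the anchor question: AlexeyAB's darknet can recompute anchors for a custom dataset with its `calc_anchors` command, which clusters the training-set box sizes by IoU. The same idea can be sketched in plain Python; this is a simplified illustration (naive k-means with a 1-IoU distance, function name hypothetical, box sizes assumed to be in the YOLO-normalized 0..1 range), not darknet's actual implementation:

```python
import random

def kmeans_anchors(box_sizes, k=9, iters=100, seed=0):
    """Cluster (w, h) pairs with naive k-means, assigning each box
    to the center it overlaps best (highest IoU, i.e. lowest 1-IoU)."""
    rng = random.Random(seed)
    centers = rng.sample(box_sizes, k)

    def iou(a, b):
        # IoU of two boxes aligned at a common corner: only sizes matter
        inter = min(a[0], b[0]) * min(a[1], b[1])
        return inter / (a[0] * a[1] + b[0] * b[1] - inter)

    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for wh in box_sizes:
            best = max(range(k), key=lambda i: iou(wh, centers[i]))
            clusters[best].append(wh)
        # recompute each center as the mean of its cluster; keep empty
        # clusters' centers unchanged
        centers = [
            (sum(w for w, _ in c) / len(c), sum(h for _, h in c) / len(c))
            if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return sorted(centers)
```

Because damage regions have no consistent aspect ratio, anchors recomputed from the actual training boxes should fit the data better than the default COCO anchors.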