What is the reason for 'no detections found by model'? #628
Comments
You need more than 2 epochs to train the network on COCO, even if you are using pre-trained weights in the encoder.
Yes, I used 10 epochs, and every detection task (except right after epoch 0) does not work (refer to image).
My train.txt and valid.txt files look like this.
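Since the list files and label files are a common source of this error, here is a small sanity-check script. It assumes the usual darknet-style layout this repo uses: each line of train.txt is an image path, the label file sits at the same path with `images` replaced by `labels` and a `.txt` extension, and each label line is `class_id x_center y_center width height` with the box values normalized to [0, 1]. This is an illustrative sketch, not part of the repo.

```python
import os

def check_annotations(list_file):
    """Scan a darknet-style train/valid list and report common formatting problems.

    Assumed layout (standard for this repo, but verify against your setup):
      - each line of `list_file` is one image path
      - label path = image path with 'images' -> 'labels' and extension -> '.txt'
      - each label line: <class_id> <x_center> <y_center> <width> <height>,
        with the four box values normalized to [0, 1]
    Returns a list of human-readable problem descriptions (empty = all good).
    """
    problems = []
    with open(list_file) as f:
        image_paths = [line.strip() for line in f if line.strip()]
    for img_path in image_paths:
        label_path = os.path.splitext(img_path.replace("images", "labels"))[0] + ".txt"
        if not os.path.exists(label_path):
            problems.append("missing label file: " + label_path)
            continue
        with open(label_path) as lf:
            for lineno, line in enumerate(lf, 1):
                parts = line.split()
                if len(parts) != 5:
                    problems.append(f"{label_path}:{lineno}: expected 5 fields, got {len(parts)}")
                    continue
                coords = [float(v) for v in parts[1:]]
                if not all(0.0 <= v <= 1.0 for v in coords):
                    problems.append(f"{label_path}:{lineno}: box values not normalized: {coords}")
    return problems
```

If this reports unnormalized coordinates or missing label files, the dataloader will silently produce empty or wrong targets, which matches the "no detections" symptom.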
That is indeed weird. Maybe I'll give it a try in the next few days.
I think it is because of the dataset. I will try training with a larger dataset and let you know here soon.
But it should easily overfit on this smaller set. Maybe there is something wrong with the formatting and the import.
Did you solve this problem? I also encountered this error.
On the same dataset?
@Zzheng-6 Not yet. I tried transfer learning with the following conditions:
dataset: 500 sampled images from the COCO2014 dataset
pretrained weights: darknet53.conv.74
evaluation_interval: 2
The result is like this:
|    66 | keyboard       | 0.00000 |
|    67 | cell phone     | 0.00000 |
|    68 | microwave      | 0.00000 |
|    69 | oven           | 0.00000 |
|    70 | toaster        | 0.00000 |
|    71 | sink           | 0.00001 |
|    72 | refrigerator   | 0.00007 |
|    73 | book           | 0.00002 |
|    74 | clock          | 0.00000 |
|    75 | vase           | 0.00000 |
|    76 | scissors       | 0.00000 |
|    77 | teddy bear     | 0.00003 |
|    79 | toothbrush     | 0.00000 |
+-------+----------------+---------+
---- mAP 2.601784971674012e-05
Training Epoch 1: 100%|██████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.62it/s]
---- training loss 341.9720458984375
Training Epoch 2: 100%|██████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.51it/s]
---- training loss 304.5881652832031
---- Evaluating Model ----
Detecting objects:   0%|                                                                           | 0/63 [00:00<?, ?it/s]
The first detection task works, but the second detection task does not; it just disconnects the server I use.
That's really a headache. Thank you for your reply. Can you tell me if you find a solution?
I met the same problem. It seems that all the confidence scores are too low, which makes the mAP quite low even when the test set is the train set.
What image size are you using?
I use 'image_size = 416'. |
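As a side note, YOLOv3 downsamples the input by a factor of 32 through its strided layers, so the image size must be a positive multiple of 32 (416 qualifies). A trivial sanity check, as a sketch:

```python
def valid_yolo_size(img_size, stride=32):
    """YOLOv3's deepest feature map is the input downsampled by `stride` (32),
    so the input side length must be a positive multiple of it."""
    return img_size > 0 and img_size % stride == 0
```

So 416 itself is fine here, and the image size is unlikely to be the cause.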
I tried to train it with the coco128 dataset, but it takes significantly longer than 10 epochs to make useful detections. Because of the low object probability for a given cell, it is best for the network to predict nothing in the beginning. I also updated the loss function. Maybe try again.
This looks like an issue with the evaluation. During training, no non-maximum suppression (NMS) is applied; evaluation, on the other hand, uses NMS. NMS does not scale well to large quantities of predicted objects, so if the network predicts many wrong objects in the beginning, it can get "stuck" in the evaluation. I lowered the timeout for the NMS step, because an inaccurate NMS doesn't matter much in a situation where we have an unrealistic number of objects.
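To make the scaling problem concrete, here is a minimal greedy NMS sketch with the two guards that keep it tractable: low-confidence candidates are dropped before the O(n²) suppression loop, and at most `max_det` boxes are kept. This is an illustrative sketch, not the repo's actual implementation; the `conf_thres` and `max_det` values are assumed defaults.

```python
def nms(boxes, iou_thres=0.5, conf_thres=0.5, max_det=300):
    """Greedy non-maximum suppression (illustrative sketch).

    `boxes` is a list of (x1, y1, x2, y2, confidence) tuples.
    Low-confidence candidates are filtered out *before* the quadratic
    suppression loop, and at most `max_det` boxes survive, so an untrained
    network emitting thousands of junk detections cannot stall evaluation.
    """
    def iou(a, b):
        # Intersection-over-union of two (x1, y1, x2, y2, conf) boxes.
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    # Guard 1: confidence pre-filter shrinks n before the O(n^2) loop.
    candidates = sorted((b for b in boxes if b[4] >= conf_thres),
                        key=lambda b: b[4], reverse=True)
    kept = []
    for cand in candidates:
        # Guard 2: hard cap on surviving detections.
        if len(kept) >= max_det:
            break
        if all(iou(cand, k) < iou_thres for k in kept):
            kept.append(cand)
    return kept
```

Without the pre-filter, an early-training network that scores thousands of grid cells above zero confidence makes the suppression loop quadratic in that count, which is the "stuck" behavior described above.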
I want to do transfer learning using the darknet53.conv.74 backbone and the coco128 dataset.
This is the result of $ python train.py --data_config data/coco.data --pretrained_weights weights/darknet53.conv.74.
Why does detection not work here? (I didn't change the code.)