
Got all evaluation results -1 on custom dateset. #263

Closed
3 tasks done
RalphGuo opened this issue Nov 8, 2022 · 3 comments
Labels
question Further information is requested

Comments


RalphGuo commented Nov 8, 2022

Prerequisite

💬 Describe the reimplementation questions

I tried to train on my own data with YOLOv5; however, at every validation stage I got the following results, all of them -1:

```
Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = -1.000
Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = -1.000
Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = -1.000
...
Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
11/08 16:40:01 - mmengine - INFO - bbox_mAP_copypaste: -1.000 -1.000 -1.000 -1.000 -1.000 -1.000
```

I used my training data for validation and got the same result, so I don't think it is due to bad training.
I also checked my label.json, and the areas look normal.

Here is my config. I'm new to OpenMMLab, so I only made small changes to the balloon detection tutorial config; there is also only one category in my data.

```python
_base_ = './yolov5_s-v61_syncbn_fast_8xb16-300e_coco.py'
data_root = '../Dataset/uf_easy/'
img_scale = (640, 640)
deepen_factor = 0.33
widen_factor = 0.5
max_epochs = 300

metainfo = {
    'CLASSES': ('uf', ),
    'PALETTE': [
        (220, 20, 60),
    ]
}

train_dataloader = dict(
    batch_size=train_batch_size_per_gpu,
    num_workers=train_num_worker,
    dataset=dict(
        data_root=data_root,
        metainfo=metainfo,
        data_prefix=dict(img='train/'),
        ann_file='train.json'))
val_dataloader = dict(
    batch_size=train_batch_size_per_gpu,
    num_workers=train_num_worker,
    dataset=dict(
        data_root=data_root,
        metainfo=metainfo,
        data_prefix=dict(img='train/'),
        ann_file='train.json'))
test_dataloader = val_dataloader
val_evaluator = dict(ann_file=data_root + 'train.json')
test_evaluator = val_evaluator
model = dict(bbox_head=dict(head_module=dict(num_classes=1)))
```
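One quick way to rule out annotation problems is to sanity-check `train.json` directly. The helper below is a hypothetical sketch (not part of MMYOLO): it reports the area range and any `category_id` values that don't appear in `categories`, demonstrated on a toy in-memory dict standing in for the real file:

```python
import json

def check_coco_annotations(ann):
    """Return (min_area, max_area, unknown_category_ids) for a COCO-style dict."""
    valid_ids = {c["id"] for c in ann["categories"]}
    areas = [a["area"] for a in ann["annotations"]]
    unknown = {a["category_id"] for a in ann["annotations"]} - valid_ids
    return min(areas), max(areas), unknown

# With the real file: ann = json.load(open(data_root + 'train.json'))
toy = {
    "categories": [{"id": 0, "name": "uf"}],
    "annotations": [
        {"area": 7000.0, "category_id": 0},
        {"area": 110000.0, "category_id": 0},
    ],
}
print(check_coco_annotations(toy))  # (7000.0, 110000.0, set())
```

A non-empty `unknown` set would mean the annotations reference a category id that the evaluator cannot match against the dataset's class list.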

Environment

TorchVision: 0.10.0+cu111
OpenCV: 4.5.3
MMEngine: 0.1.0

Expected results

No response

Additional information

No response

hhaAndroid (Collaborator) commented:

@RalphGuo

  • COCO dataset, AP or AR = -1
    1. By the COCO definition, "small" and "medium" objects are those with area less than 1024 (32×32) and 9216 (96×96) pixels, respectively.
    2. If the corresponding area range contains no object, the AP and AR for that range are set to -1.
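Those thresholds can be written down as a small sketch (illustrative only; the bucket names follow the COCO convention):

```python
# COCO area buckets: "small" < 32*32, "medium" < 96*96, everything else "large".
def coco_area_bucket(area: float) -> str:
    """Classify a ground-truth box area (in pixels^2) into COCO's size buckets."""
    if area < 32 ** 2:        # < 1024 px^2
        return "small"
    if area < 96 ** 2:        # < 9216 px^2
        return "medium"
    return "large"

for a in (500, 5000, 50000):
    print(a, coco_area_bucket(a))
```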


RalphGuo commented Nov 9, 2022

> 2. If the corresponding area range contains no object, the AP and AR for that range are set to -1.

@hhaAndroid Hi, thanks for the quick reply.
i. I checked my JSON file and the areas are normal; they range from about 7k to 110k, so I don't think that's the problem.
ii. As I posted, I use the same data in train_dataloader and val_dataloader/test_dataloader, so I should have gotten good evaluation results, right? Is there any chance that the corresponding area has no object under this condition?
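For context, pycocotools' `COCOeval` summarization averages only the valid precision/recall entries and falls back to -1 when a bucket has no matching ground truths. A simplified, hypothetical sketch of that fallback (not the actual pycocotools code):

```python
def summarize_bucket(values):
    """Mimic COCOeval's fallback: average the entries marked valid (> -1), else -1."""
    valid = [v for v in values if v > -1]
    return sum(valid) / len(valid) if valid else -1.0

print(summarize_bucket([]))          # -1.0: no ground truth in this area range
print(summarize_bucket([0.6, 0.8]))
```

So an all -1 summary across *every* bucket, including `area=all`, means the evaluator matched no ground truths at all, not merely that one size range was empty.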

@hhaAndroid hhaAndroid added the question Further information is requested label Nov 9, 2022
hhaAndroid (Collaborator) commented:

@RalphGuo This situation is indeed a bit strange. Did you find out why?
