ValueError: need at least one array to concatenate #3628
Comments
Hi, concatenating an empty array will raise this error, and there is a size mismatch between your model and the loaded state dict (e.g., the number of classes: 2 vs 81). Will correcting the size mismatch help?
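To see where the message comes from: mmdetection's group sampler concatenates per-group index arrays at the end of its iterator, and NumPy refuses to concatenate an empty list. A minimal reproduction of just that step:

```python
import numpy as np

# The sampler collects one index array per aspect-ratio group. If every
# image was filtered out (e.g. because no annotation matched the configured
# classes), the list is empty and np.concatenate raises the reported error.
indices = []  # what the sampler ends up with when the dataset is empty
try:
    np.concatenate(indices)
except ValueError as err:
    message = str(err)
    print(message)  # need at least one array to concatenate
```

So the error is a symptom: the real problem is upstream, in a dataset that yielded zero usable samples.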
Thanks for responding so quickly! According to #1151, the size-mismatch warning is expected when using pre-trained models, and I have specified the altered classes in the configs. How would I go about correcting the size mismatch while using a pre-trained model?
Maybe you need to modify `class_names.py`.
@WAMAWAMA how can I modify `class_names.py` for my custom dataset? I followed this tutorial and got a bunch of bugs.
I encountered the same problem. Did you solve it?
Add your dataset labels as a new function in `mmdet/core/evaluation/class_names.py`, then register it in `dataset_aliases`:

```python
def toby_classes():
    return ['husky', 'chihuahua', 'alaska']

dataset_aliases = {
    'voc': ['voc', 'pascal_voc', 'voc07', 'voc12'],
    'imagenet_det': ['det', 'imagenet_det', 'ilsvrc_det'],
    'imagenet_vid': ['vid', 'imagenet_vid', 'ilsvrc_vid'],
    'coco': ['coco', 'mscoco', 'ms_coco'],
    'wider_face': ['WIDERFaceDataset', 'wider_face', 'WDIERFace'],
    'cityscapes': ['cityscapes'],
    'toby': ['toby']
}
```

Then add it to `mmdet/core/evaluation/__init__.py`:

```python
from .class_names import (cityscapes_classes, coco_classes, dataset_aliases,
                          get_classes, imagenet_det_classes,
                          imagenet_vid_classes, voc_classes, toby_classes)
from .eval_hooks import DistEvalHook, EvalHook
from .mean_ap import average_precision, eval_map, print_map_summary
from .recall import (eval_recalls, plot_iou_recall, plot_num_recall,
                     print_recall_summary)

__all__ = [
    'voc_classes', 'imagenet_det_classes', 'imagenet_vid_classes',
    'coco_classes', 'cityscapes_classes', 'dataset_aliases', 'get_classes',
    'DistEvalHook', 'EvalHook', 'average_precision', 'eval_map',
    'print_map_summary', 'eval_recalls', 'print_recall_summary',
    'plot_num_recall', 'plot_iou_recall', 'toby_classes'
]
```

Then go to `mmdet/core/evaluation/mean_ap.py` and add one more line around line 436:

```python
if dataset is None:
    label_names = [str(i) for i in range(num_classes)]
elif mmcv.is_str(dataset):
    dataset = 'toby'  # this line
    label_names = get_classes(dataset)
else:
    label_names = dataset
```

Note: this is only a temporary workaround; I was too busy to dig deeper into the code, so you may want to find a more general solution.
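For context on why the alias table matters, here is a simplified sketch of how a `get_classes`-style lookup resolves a dataset name through `dataset_aliases` (illustrative only, not the exact mmdet source):

```python
# Simplified sketch of alias resolution: find the canonical name whose
# alias list contains the requested dataset, then call the matching
# <name>_classes() function.
def toby_classes():
    return ['husky', 'chihuahua', 'alaska']

dataset_aliases = {
    'coco': ['coco', 'mscoco', 'ms_coco'],
    'toby': ['toby'],
}

def get_classes(dataset):
    for canonical, aliases in dataset_aliases.items():
        if dataset in aliases:
            # look up the matching *_classes() function by name and call it
            return globals()[canonical + '_classes']()
    raise ValueError('Unrecognized dataset: {}'.format(dataset))

print(get_classes('toby'))  # ['husky', 'chihuahua', 'alaska']
```

This is why both the new `toby_classes` function and the `'toby'` alias entry are needed: the lookup fails with an unrecognized-dataset error if either is missing.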
I encountered the same error and solved it. It is probably because the class name in your annotation file doesn't match the class name in your config file (in your config file it is called 'Spine'), so please check your annotation file.
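A quick way to catch this mismatch is to diff the category names in the COCO json against the classes in the config. The helper below is hypothetical (not part of mmdet); note that the comparison is case-sensitive, so 'spine' vs 'Spine' counts as a mismatch:

```python
import json

def check_class_names(ann_file, config_classes):
    """Compare category names in a COCO-style json against the classes
    listed in the config. A mismatch (including case) silently filters
    out every image and leads to the empty-array error."""
    with open(ann_file) as f:
        cats = {c['name'] for c in json.load(f).get('categories', [])}
    missing = set(config_classes) - cats
    if missing:
        print('Classes in config but not in annotations:', sorted(missing))
    return not missing
```

Running this before training turns a cryptic sampler crash into an explicit list of offending class names.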
@MeepoAII I used a custom dataset, so there is no class name in the annotation file. And finally I can see someone who has committed to this repository show up. Where did you all go? There are a hundred issues waiting to be solved.
Emmm, I'm not a member of mmdetection, I'm just a contributor... btw, PRs are welcome~~ @GiangHLe
@MeepoAII I met the same error. I tried a lot of methods but nothing worked. Can you help me?
I just solved the problem. Use: `python setup.py install`
I found the problem.
No, not that way... ValueError: need at least one array to concatenate #3628. I found one possible cause of the error: if you are using mmdetection's Docker image like me, you should run the command before each training, from the training root:
One common reason the loss becomes NaN is a learning rate so high that training diverges; reducing the learning rate can sometimes solve the problem.
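The advice above can be expressed as a tiny guard in a training loop. This is an illustrative sketch (not mmdet API): when the most recent loss is NaN, cut the learning rate by a factor and signal that training should restart from the last good checkpoint:

```python
import math

def adjust_lr_on_nan(last_loss, lr, factor=0.1):
    """Illustrative guard, not mmdet API: if the loss has diverged to NaN,
    reduce the learning rate and flag that a restart is needed."""
    if math.isnan(last_loss):
        return lr * factor, True   # reduced lr; resume from last checkpoint
    return lr, False
```

In mmdetection itself the equivalent fix is simply lowering `lr` in the `optimizer` dict of the config and re-launching training.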
I tried everything above but nothing works. Please help me out.
@Sourav2ch One more reason it might happen: the user does want to use ground-truth masks, but the config file, if used by default, might have flags such as:
I generate the data with this script (note: `from PIL import Image` was missing from my original snippet):

```python
import random
import json
from PIL import Image

anno = {
    "images": [],
    "annotations": []
}
anno['categories'] = [
    {"id": 1, "name": "person"},
    {"id": 2, "name": "bicycle"}
]
for idx, i in enumerate(range(1), 1):
    cls = random.randint(0, 1)
    color = '#FFFFFF' if cls == 0 else '#FF0000'
    # paste a 30x30 colored patch at a random offset on a 50x50 black image
    im1 = Image.new('RGB', (30, 30), color)
    im = Image.new('RGB', (50, 50), '#000000')
    x, y = random.randint(0, 10), random.randint(0, 10)
    im.paste(im1, (x, y))
    im.save('data/images/{}.png'.format(i))
    image = {
        'id': idx,
        'width': 50,
        'height': 50,
        'file_name': '{}.png'.format(i)
    }
    info = {
        'id': idx,
        'image_id': idx,
        'category_id': cls + 1,
        'area': 900,
        'bbox': [x, y, 30, 30],
        'iscrowd': 0,
        'segmentation': None
    }
    anno['images'].append(image)
    anno['annotations'].append(info)
with open('data/train.json', 'w') as f:
    json.dump(anno, f, indent=4)
```

Then I changed the classes info,
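Before pointing mmdetection at a generated file like this, it is worth sanity-checking its structure, since a missing top-level key or a dangling `image_id` is enough to leave the dataset empty. A minimal, hedged checker (not an official COCO validator):

```python
import json

def sanity_check_coco(path):
    """Minimal structural check for a COCO-style json: verifies the
    required top-level keys exist and that every annotation references
    an existing image and category."""
    with open(path) as f:
        coco = json.load(f)
    for key in ('images', 'annotations', 'categories'):
        assert key in coco, 'missing top-level key: {}'.format(key)
    image_ids = {im['id'] for im in coco['images']}
    cat_ids = {c['id'] for c in coco['categories']}
    for ann in coco['annotations']:
        assert ann['image_id'] in image_ids, 'annotation with unknown image_id'
        assert ann['category_id'] in cat_ids, 'annotation with unknown category_id'
    return True
```

It will not catch semantic problems (e.g. `"segmentation": None` when a mask head expects polygons), but it rules out the structural causes reported in this thread.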
Maybe I got it: the image is too small. It worked when I adjusted its size to
I had the same error; the issue was my class names in "categories" were
AttributeError: module 'mmdet.core.evaluation' has no attribute 'MyDataset_classes'. Any idea how to solve it?
For anyone still hitting this issue: it may be because the format of your annotations file is incorrect. In my case, my COCO json files did not have an "images" key.
I feel this would be resolved with better error handling, e.g., "class x is present in the annotation file but was not found in dataset_x.py".
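As a sketch of what that better error handling could look like, here is a hypothetical guard around the sampler's concatenate step (not a patch to the actual `group_sampler.py`) that fails with an actionable message instead of the opaque ValueError:

```python
import numpy as np

def concat_group_indices(groups):
    """Hypothetical guard around the group sampler's final concatenate:
    skip empty groups, and raise a descriptive error if nothing remains."""
    indices = [np.asarray(g) for g in groups if len(g)]
    if not indices:
        raise RuntimeError(
            'Dataset produced no trainable samples. Check that the class '
            'names in your annotation file match the classes in your config '
            '(including case), and that the annotation file has non-empty '
            '"images" and "annotations" lists.')
    return np.concatenate(indices)
```

The check costs nothing in the normal case and replaces a stack trace deep inside NumPy with a message that names the two most common root causes in this thread.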
Fixed by modifying classes in
I got an error like this when I tried to train using my custom dataset: Traceback (most recent call last):
The post by cpwan about issue #9610 helped me solve "ValueError: need at least one array to concatenate" when running the official custom-dataset example for mmdet. He found that the metainfo keys have been changed to lowercase since #9469.
Thanks, martinurbieta.
I met the same issue as you. But even though the latest version has been changed to lowercase, as the comment above says, this error still comes up.
In my case I had missed a metainfo key in the config file, per this guide:
Thank you for your reply. Maybe that's not the case here. The image I fed to the model was too large; even using a transform to resize it didn't help, but after resizing the image beforehand the problem was fixed. This model requires more memory than Mask R-CNN, so perhaps that is the cause.
I am having an issue with training after transforming my labeled data into COCO format.
Other threads with the same issue have been closed (e.g. #210), but none of the proposed solutions fix my error.
Error in Question
python tools/train.py ./configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_3x_coco.py --work-dir tutorial_exps --resume-from ./checkpoints/faster_rcnn_r50_caffe_fpn_mstrain_3x_coco_bbox_mAP-0.398_20200504_163323-30042637.pth
2020-08-26 14:08:56,855 - mmdet - INFO - load model from: open-mmlab://detectron2/resnet50_caffe
2020-08-26 14:08:56,928 - mmdet - WARNING - The model and loaded state dict do not match exactly
unexpected key in source state_dict: conv1.bias
loading annotations into memory...
Done (t=0.35s)
creating index...
index created!
loading annotations into memory...
Done (t=0.34s)
creating index...
index created!
2020-08-26 14:09:00,231 - mmdet - INFO - load checkpoint from ./checkpoints/faster_rcnn_r50_caffe_fpn_mstrain_3x_coco_bbox_mAP-0.398_20200504_163323-30042637.pth
2020-08-26 14:09:00,349 - mmdet - WARNING - The model and loaded state dict do not match exactly
size mismatch for roi_head.bbox_head.fc_cls.weight: copying a param with shape torch.Size([81, 1024]) from checkpoint, the shape in current model is torch.Size([2, 1024]).
size mismatch for roi_head.bbox_head.fc_cls.bias: copying a param with shape torch.Size([81]) from checkpoint, the shape in current model is torch.Size([2]).
size mismatch for roi_head.bbox_head.fc_reg.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([4, 1024]).
size mismatch for roi_head.bbox_head.fc_reg.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([4]).
2020-08-26 14:09:00,350 - mmdet - INFO - resumed epoch 36, iter 263880
<torch.utils.data.dataloader.DataLoader object at 0x7fa21705a510> load
2020-08-26 14:09:00,350 - mmdet - INFO - Start running, host: ding@ding-ROG-STRIX-Z390-E-GAMING, work_dir: /home/ding/SpineQuant/SpineQuant/mmdetection/tutorial_exps
2020-08-26 14:09:00,350 - mmdet - INFO - workflow: [('train', 1)], max: 100 epochs
Traceback (most recent call last):
File "tools/train.py", line 289, in <module>
main()
File "tools/train.py", line 285, in main
meta=meta)
File "/home/ding/SpineQuant/SpineQuant/mmdetection/mmdet/apis/train.py", line 144, in train_detector
runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
File "/home/ding/anaconda3/envs/mmlab/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 122, in run
epoch_runner(data_loaders[i], **kwargs)
File "/home/ding/anaconda3/envs/mmlab/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 27, in train
for i, data_batch in enumerate(data_loader):
File "/home/ding/anaconda3/envs/mmlab/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 278, in __iter__
return _MultiProcessingDataLoaderIter(self)
File "/home/ding/anaconda3/envs/mmlab/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 709, in __init__
self._try_put_index()
File "/home/ding/anaconda3/envs/mmlab/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 826, in _try_put_index
index = self._next_index()
File "/home/ding/anaconda3/envs/mmlab/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 318, in _next_index
return next(self._sampler_iter)  # may raise StopIteration
File "/home/ding/anaconda3/envs/mmlab/lib/python3.7/site-packages/torch/utils/data/sampler.py", line 200, in __iter__
for idx in self.sampler:
File "/home/ding/SpineQuant/SpineQuant/mmdetection/mmdet/datasets/samplers/group_sampler.py", line 40, in __iter__
indices = np.concatenate(indices)
File "<__array_function__ internals>", line 6, in concatenate
ValueError: need at least one array to concatenate
CONFIGS
2020-08-26 14:08:55,796 - mmdet - INFO - Distributed training: False
2020-08-26 14:08:56,589 - mmdet - INFO - Config:
model = dict(
type='FasterRCNN',
pretrained='open-mmlab://detectron2/resnet50_caffe',
backbone=dict(
type='ResNet',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=False),
norm_eval=True,
style='caffe'),
neck=dict(
type='FPN',
in_channels=[256, 512, 1024, 2048],
out_channels=256,
num_outs=5),
rpn_head=dict(
type='RPNHead',
in_channels=256,
feat_channels=256,
anchor_generator=dict(
type='AnchorGenerator',
scales=[8],
ratios=[0.5, 1.0, 2.0],
strides=[4, 8, 16, 32, 64]),
bbox_coder=dict(
type='DeltaXYWHBBoxCoder',
target_means=[0.0, 0.0, 0.0, 0.0],
target_stds=[1.0, 1.0, 1.0, 1.0]),
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
roi_head=dict(
type='StandardRoIHead',
bbox_roi_extractor=dict(
type='SingleRoIExtractor',
roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
bbox_head=dict(
type='Shared2FCBBoxHead',
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=1,
bbox_coder=dict(
type='DeltaXYWHBBoxCoder',
target_means=[0.0, 0.0, 0.0, 0.0],
target_stds=[0.1, 0.1, 0.2, 0.2]),
reg_class_agnostic=False,
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type='L1Loss', loss_weight=1.0))))
train_cfg = dict(
rpn=dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.7,
neg_iou_thr=0.3,
min_pos_iou=0.3,
match_low_quality=True,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=256,
pos_fraction=0.5,
neg_pos_ub=-1,
add_gt_as_proposals=False),
allowed_border=-1,
pos_weight=-1,
debug=False),
rpn_proposal=dict(
nms_across_levels=False,
nms_pre=2000,
nms_post=1000,
max_num=1000,
nms_thr=0.7,
min_bbox_size=0),
rcnn=dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
match_low_quality=False,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
pos_weight=-1,
debug=False))
test_cfg = dict(
rpn=dict(
nms_across_levels=False,
nms_pre=1000,
nms_post=1000,
max_num=1000,
nms_thr=0.7,
min_bbox_size=0),
rcnn=dict(
score_thr=0.05,
nms=dict(type='nms', iou_threshold=0.5),
max_per_img=100))
dataset_type = 'CocoDataset'
data_root = '/home/ding/SpineQuant/SpineQuant/mmdetection/'
img_norm_cfg = dict(
mean=[103.53, 116.28, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', with_bbox=True),
dict(
type='Resize',
img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736),
(1333, 768), (1333, 800)],
multiscale_mode='value',
keep_ratio=True),
dict(type='RandomFlip', flip_ratio=0.5),
dict(
type='Normalize',
mean=[103.53, 116.28, 123.675],
std=[1.0, 1.0, 1.0],
to_rgb=False),
dict(type='Pad', size_divisor=32),
dict(type='DefaultFormatBundle'),
dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(1333, 800),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(
type='Normalize',
mean=[103.53, 116.28, 123.675],
std=[1.0, 1.0, 1.0],
to_rgb=False),
dict(type='Pad', size_divisor=32),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img'])
])
]
data = dict(
samples_per_gpu=2,
workers_per_gpu=2,
train=dict(
type='CocoDataset',
ann_file='train.json',
img_prefix='images2D',
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', with_bbox=True),
dict(
type='Resize',
img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736),
(1333, 768), (1333, 800)],
multiscale_mode='value',
keep_ratio=True),
dict(type='RandomFlip', flip_ratio=0.5),
dict(
type='Normalize',
mean=[103.53, 116.28, 123.675],
std=[1.0, 1.0, 1.0],
to_rgb=False),
dict(type='Pad', size_divisor=32),
dict(type='DefaultFormatBundle'),
dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
],
data_root='/home/ding/SpineQuant/SpineQuant/mmdetection/training/',
classes=['Spine']),
val=dict(
type='CocoDataset',
ann_file='train.json',
img_prefix='images2D',
pipeline=[
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(1333, 800),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(
type='Normalize',
mean=[103.53, 116.28, 123.675],
std=[1.0, 1.0, 1.0],
to_rgb=False),
dict(type='Pad', size_divisor=32),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img'])
])
],
data_root='/home/ding/SpineQuant/SpineQuant/mmdetection/training/',
classes=['Spine']),
test=dict(
type='CocoDataset',
ann_file='train.json',
img_prefix='images2D',
pipeline=[
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(1333, 800),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(
type='Normalize',
mean=[103.53, 116.28, 123.675],
std=[1.0, 1.0, 1.0],
to_rgb=False),
dict(type='Pad', size_divisor=32),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img'])
])
],
data_root='/home/ding/SpineQuant/SpineQuant/mmdetection/training/',
classes=['Spine']))
evaluation = dict(interval=10, metric='mAP')
optimizer = dict(type='SGD', lr=0.0025, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
lr_config = dict(
policy='step',
warmup=None,
warmup_iters=500,
warmup_ratio=0.001,
step=[28, 34])
total_epochs = 100
checkpoint_config = dict(interval=5)
log_config = dict(interval=5, hooks=[dict(type='TextLoggerHook')])
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = None
resume_from = './checkpoints/faster_rcnn_r50_caffe_fpn_mstrain_3x_coco_bbox_mAP-0.398_20200504_163323-30042637.pth'
workflow = [('train', 1)]
work_dir = 'tutorial_exps'
gpu_ids = range(0, 1)
Environment Information
2020-08-26 14:08:55,795 - mmdet - INFO - Environment info:
sys.platform: linux
Python: 3.7.7 (default, May 7 2020, 21:25:33) [GCC 7.3.0]
CUDA available: True
CUDA_HOME: /usr/local/cuda-9.2
NVCC: Cuda compilation tools, release 9.2, V9.2.148
GPU 0: GeForce RTX 2080
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.3.1
PyTorch compiling details: PyTorch built with:
TorchVision: 0.4.2
OpenCV: 4.1.1
MMCV: 1.0.5
MMDetection: 2.3.0+ae453fa
MMDetection Compiler: GCC 7.3
MMDetection CUDA Compiler: 9.2
From digging around the files listed in the traceback and printing out values, it looks like the data loaders in tools/train.py aren't collecting the images and annotations from the COCO json file, leaving an empty array for the concatenation. I have followed the instructions here to format the COCO annotations and have verified their validity against other examples online (such as the other threads with this same error), but I have not found the cause of this error. If I use a CustomDataset, I do not get this error (training doesn't succeed with my custom labels, so I switched to COCO for standardized formatting and better-documented errors).
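The diagnosis above matches the most common root cause in this thread: the dataset silently filters out every image before the sampler runs. A minimal model of that filtering (assumed behavior mirroring CocoDataset's empty-GT filtering, not the actual source) shows how a class-name mismatch, e.g. the config's `classes=['Spine']` against an annotation category named 'spine', empties the dataset:

```python
# Images whose annotations carry none of the configured class names are
# dropped; a case mismatch therefore silently discards every image.
def filter_images(images, annotations, categories, config_classes):
    name_by_id = {c['id']: c['name'] for c in categories}
    wanted = set(config_classes)
    kept = []
    for im in images:
        anns = [a for a in annotations if a['image_id'] == im['id']]
        if any(name_by_id[a['category_id']] in wanted for a in anns):
            kept.append(im)
    return kept  # an empty list here is what the sampler later fails on
```

With zero images surviving the filter, the group sampler has no index arrays to concatenate, which produces exactly the ValueError in the traceback.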
Any help or reference is much appreciated!