class `EpochBasedTrainLoop` in mmengine/runner/loops.py: class `CocoDataset` in mmdet/datasets/coco.py: need at least one array to concatenate
#9610
Comments
Seems to be the same issue as #9613.
Found out why. The metainfo keys have been lowercase since #9469. On a custom dataset, use lowercase keys in the dictionary instead. Changing

```python
cfg.metainfo = {
    'CLASSES': ('balloon', ),
    'PALETTE': [
        (220, 20, 60),
    ]
}
```

to

```python
cfg.metainfo = {
    'classes': ('balloon', ),
    'palette': [
        (220, 20, 60),
    ]
}
```

should work.
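For anyone migrating an old config, the key casing can also be fixed programmatically. The sketch below is a hypothetical helper, not part of mmdet; it simply lowercases every key so that mmdet 3.x (after #9469) recognises them:

```python
# Hypothetical helper (not an mmdet API): lowercase legacy metainfo keys
# such as 'CLASSES'/'PALETTE' so newer mmdet versions can find them.
def normalize_metainfo(metainfo):
    """Return a copy of `metainfo` with all keys lowercased."""
    return {key.lower(): value for key, value in metainfo.items()}


old_metainfo = {
    'CLASSES': ('balloon', ),
    'PALETTE': [(220, 20, 60)],
}
print(normalize_metainfo(old_metainfo))
# {'classes': ('balloon',), 'palette': [(220, 20, 60)]}
```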
This works! Thanks a lot!
How was this solved? I'm completely confused.
Hi, the following is the code stored in configs/detr/detr_r101_100e_coco.py (truncated in the original comment):

```python
train_pipeline = [ ...
optim_wrapper = dict( ...
max_epochs = 150
param_scheduler = [ ...
auto_scale_lr = dict(base_batch_size=16)
```
@JunKaiLiao Did the solution above not work for you?

@amanikiruga I found out that the class name in the annotation file was wrong. After fixing that, it trains without any issue. Thank you!!
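Since a misspelled class name in the annotation file typically leaves the dataset empty after filtering, which produces the same "need at least one array to concatenate" error, a stdlib-only sanity check can catch the mismatch before training. `find_unmatched_categories` below is an illustrative helper, not an mmdet API:

```python
import json


def find_unmatched_categories(ann_file, classes):
    """Return COCO category names that are absent from metainfo `classes`.

    A non-empty result usually means the dataset's class filtering drops
    every annotation, leaving an empty data list when the dataset is built.
    """
    with open(ann_file) as f:
        coco = json.load(f)
    ann_names = {cat['name'] for cat in coco.get('categories', [])}
    return sorted(ann_names - set(classes))
```

For example, an annotation file declaring the category `Balloon` while the config says `classes=('balloon',)` would be reported here instead of failing deep inside `runner.train()`.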
I'm getting the same error, but I'm not using uppercase class names. Can anyone help? Traceback (most recent call last): During handling of the above exception, another exception occurred: Traceback (most recent call last): During handling of the above exception, another exception occurred: Traceback (most recent call last): here's my custom dataset class: @DATASETS.register_module()
|
At first I applied the fix given above and it still didn't work. Run `pip list` to check your mmdet installation location; my mmdet is in anaconda3/....../site-packages/mmdet, so change the file content in /site-packages/mmdet/coco.py.
isn't this wrong? |
It's the same error that happened to me.
Please help; any comments are welcome. My environment is Docker Desktop on Windows 10, and I installed using the "Dockerfile" from this repo.
My config is below:

```python
# Inherit and overwrite part of the config based on this config
_base_ = './rtmdet_m_8xb32-300e_coco.py'

data_root = '/mmdetection/workspace2/dataset/excavator/'  # dataset root

train_batch_size_per_gpu = 4
train_num_workers = 2
max_epochs = 10
stage2_num_epochs = 1
base_lr = 0.00008

metainfo = {
    'classes': ('excavator', ),
    'palette': [
        (220, 20, 60),
    ]
}

train_dataloader = dict(
    batch_size=train_batch_size_per_gpu,
    num_workers=train_num_workers,
    dataset=dict(
        data_root=data_root,
        metainfo=metainfo,
        data_prefix=dict(img='train/'),
        ann_file='train.json'))

val_dataloader = dict(
    dataset=dict(
        data_root=data_root,
        metainfo=metainfo,
        data_prefix=dict(img='val/'),
        ann_file='val.json'))

test_dataloader = val_dataloader

val_evaluator = dict(ann_file=data_root + 'val.json')
test_evaluator = val_evaluator

model = dict(bbox_head=dict(num_classes=1))

# learning rate
param_scheduler = [
    dict(
        type='LinearLR',
        start_factor=1.0e-5,
        by_epoch=False,
        begin=0,
        end=10),
    dict(
        # use cosine lr from 10 to 20 epoch
        type='CosineAnnealingLR',
        eta_min=base_lr * 0.05,
        begin=max_epochs // 2,
        end=max_epochs,
        T_max=max_epochs // 2,
        by_epoch=True,
        convert_to_iter_based=True),
]

train_pipeline_stage2 = [
    dict(type='LoadImageFromFile', backend_args=None),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(
        type='RandomResize',
        scale=(640, 640),
        ratio_range=(0.1, 2.0),
        keep_ratio=True),
    dict(type='RandomCrop', crop_size=(640, 640)),
    dict(type='YOLOXHSVRandomAug'),
    dict(type='RandomFlip', prob=0.5),
    dict(type='Pad', size=(640, 640), pad_val=dict(img=(114, 114, 114))),
    dict(type='PackDetInputs')
]

# optimizer
optim_wrapper = dict(
    _delete_=True,
    type='OptimWrapper',
    optimizer=dict(type='AdamW', lr=base_lr, weight_decay=0.05),
    paramwise_cfg=dict(
        norm_decay_mult=0, bias_decay_mult=0, bypass_duplicate=True))

default_hooks = dict(
    checkpoint=dict(
        interval=5,
        max_keep_ckpts=2,  # only keep latest 2 checkpoints
        save_best='auto'),
    logger=dict(type='LoggerHook', interval=5))

custom_hooks = [
    dict(
        type='PipelineSwitchHook',
        switch_epoch=max_epochs - stage2_num_epochs,
        switch_pipeline=train_pipeline_stage2)
]

# load COCO pre-trained weight
load_from = '/mmdetection/checkpoints/rtmdet_m_8xb32-300e_coco_20220719_112220-229f527c.pth'

train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=max_epochs, val_interval=1)

visualizer = dict(vis_backends=[
    dict(type='LocalVisBackend'),
    dict(type='TensorboardVisBackend')
])
```

The error is here:

```
Traceback (most recent call last):
  File "/mmdetection/tools/train.py", line 133, in <module>
    main()
  File "/mmdetection/tools/train.py", line 129, in main
    runner.train()
  File "/opt/conda/lib/python3.9/site-packages/mmengine/runner/runner.py", line 1701, in train
    self._train_loop = self.build_train_loop(
  File "/opt/conda/lib/python3.9/site-packages/mmengine/runner/runner.py", line 1493, in build_train_loop
    loop = LOOPS.build(
  File "/opt/conda/lib/python3.9/site-packages/mmengine/registry/registry.py", line 570, in build
    return self.build_func(cfg, *args, **kwargs, registry=self)
  File "/opt/conda/lib/python3.9/site-packages/mmengine/registry/build_functions.py", line 144, in build_from_cfg
    raise type(e)(
ValueError: class `EpochBasedTrainLoop` in mmengine/runner/loops.py: class `CocoDataset` in mmdet/datasets/coco.py: need at least one array to concatenate
```
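For this kind of empty-dataset failure, it helps to first confirm that the annotation file is non-empty and that its category names match the config's `metainfo['classes']` (here `('excavator', )`). The stdlib-only sketch below does that check; the helper name is illustrative, not an mmdet API:

```python
import json


def coco_split_summary(ann_file):
    """Summarise a COCO-format annotation file: image and annotation
    counts plus the category names it declares."""
    with open(ann_file) as f:
        coco = json.load(f)
    return {
        'images': len(coco.get('images', [])),
        'annotations': len(coco.get('annotations', [])),
        'categories': [cat['name'] for cat in coco.get('categories', [])],
    }
```

If `'images'` or `'annotations'` is 0, or `'categories'` does not contain the names listed in `metainfo['classes']`, the built dataset ends up empty and `np.concatenate` raises exactly the error shown above.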
Prerequisite
Task
I'm using the official example scripts/configs for the officially supported tasks/models/datasets.
Branch
master branch https://github.com/open-mmlab/mmdetection
Environment
torch version: 1.11.0+cu113 cuda: True
mmdetection: 3.0.0rc5
mmcv: 2.0.0rc3
mmengine: 0.4.0
Reproduces the problem - code sample
```python
# start training
runner.train()
```
Reproduces the problem - command or script
```python
# start training
runner.train()
```
Reproduces the problem - error message
```
loading annotations into memory...
Done (t=0.01s)
creating index...
index created!

ValueError                                Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/mmengine/registry/build_functions.py in build_from_cfg(cfg, registry, default_args)
    120             else:
--> 121                 obj = obj_cls(**args)  # type: ignore
    122
/kaggle/working/mmdetection/mmdet/datasets/base_det_dataset.py in __init__(self, seg_map_suffix, proposal_file, file_client_args, *args, **kwargs)
     32         self.file_client = FileClient(**file_client_args)
---> 33         super().__init__(*args, **kwargs)
     34
/opt/conda/lib/python3.7/site-packages/mmengine/dataset/base_dataset.py in __init__(self, ann_file, metainfo, data_root, data_prefix, filter_cfg, indices, serialize_data, pipeline, test_mode, lazy_init, max_refetch)
    246         if not lazy_init:
--> 247             self.full_init()
    248
/kaggle/working/mmdetection/mmdet/datasets/base_det_dataset.py in full_init(self)
     70         if self.serialize_data:
---> 71             self.data_bytes, self.data_address = self._serialize_data()
     72
/opt/conda/lib/python3.7/site-packages/mmengine/dataset/base_dataset.py in _serialize_data(self)
    763         # TODO Check if np.concatenate is necessary
--> 764         data_bytes = np.concatenate(data_list)
    765         # Empty cache for preventing making multiple copies of
<__array_function__ internals> in concatenate(*args, **kwargs)

ValueError: need at least one array to concatenate

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/mmengine/registry/build_functions.py in build_from_cfg(cfg, registry, default_args)
    120             else:
--> 121                 obj = obj_cls(**args)  # type: ignore
    122
/opt/conda/lib/python3.7/site-packages/mmengine/runner/loops.py in __init__(self, runner, dataloader, max_epochs, val_begin, val_interval, dynamic_intervals)
     42                  dynamic_intervals: Optional[List[Tuple[int, int]]] = None) -> None:
---> 43         super().__init__(runner, dataloader)
     44         self._max_epochs = int(max_epochs)
/opt/conda/lib/python3.7/site-packages/mmengine/runner/base_loop.py in __init__(self, runner, dataloader)
     26             self.dataloader = runner.build_dataloader(
---> 27                 dataloader, seed=runner.seed, diff_rank_seed=diff_rank_seed)
     28         else:
/opt/conda/lib/python3.7/site-packages/mmengine/runner/runner.py in build_dataloader(dataloader, seed, diff_rank_seed)
   1332         if isinstance(dataset_cfg, dict):
-> 1333             dataset = DATASETS.build(dataset_cfg)
   1334             if hasattr(dataset, 'full_init'):
/opt/conda/lib/python3.7/site-packages/mmengine/registry/registry.py in build(self, cfg, *args, **kwargs)
    520         """
--> 521         return self.build_func(cfg, *args, **kwargs, registry=self)
    522
/opt/conda/lib/python3.7/site-packages/mmengine/registry/build_functions.py in build_from_cfg(cfg, registry, default_args)
    135             raise type(e)(
--> 136                 f'class `{obj_cls.__name__}` in '  # type: ignore
    137                 f'{cls_location}.py: {e}')

ValueError: class `CocoDataset` in mmdet/datasets/coco.py: need at least one array to concatenate

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
/tmp/ipykernel_23/3729266276.py in <module>
      1 # start training
----> 2 runner.train()
/opt/conda/lib/python3.7/site-packages/mmengine/runner/runner.py in train(self)
   1647
   1648         self._train_loop = self.build_train_loop(
-> 1649             self._train_loop)  # type: ignore
   1650
   1651         # `build_optimizer` should be called before `build_param_scheduler`
/opt/conda/lib/python3.7/site-packages/mmengine/runner/runner.py in build_train_loop(self, loop)
   1441             loop_cfg,
   1442             default_args=dict(
-> 1443                 runner=self, dataloader=self._train_dataloader))
   1444         else:
   1445             by_epoch = loop_cfg.pop('by_epoch')
/opt/conda/lib/python3.7/site-packages/mmengine/registry/registry.py in build(self, cfg, *args, **kwargs)
    519             >>> model = MODELS.build(cfg)
    520         """
--> 521         return self.build_func(cfg, *args, **kwargs, registry=self)
    522
    523     def _add_child(self, registry: 'Registry') -> None:
/opt/conda/lib/python3.7/site-packages/mmengine/registry/build_functions.py in build_from_cfg(cfg, registry, default_args)
    134                 obj_cls.__module__.split('.'))  # type: ignore
    135             raise type(e)(
--> 136                 f'class `{obj_cls.__name__}` in '  # type: ignore
    137                 f'{cls_location}.py: {e}')
    138

ValueError: class `EpochBasedTrainLoop` in mmengine/runner/loops.py: class `CocoDataset` in mmdet/datasets/coco.py: need at least one array to concatenate
```

Additional information
I ran this tutorial (https://github.com/open-mmlab/mmdetection/blob/3.x/demo/MMDet_InstanceSeg_Tutorial.ipynb) without changing any code, but I ran into this problem when I got to runner.train(). How should I fix it?