KeyError: 'PackDetInputs is not in the transform registry. Please check whether the value of PackDetInputs
is correct or it was registered as expected. More details can be found at https://mmengine.readthedocs.io/en/latest/advanced_tutorials/config.html#import-the-custom-module'
#10900
I am using mmdet version 3.0.0 and mmcv-full version 2.0.2 |
@BIGWangYuDong can you please help me out with your valuable suggestion? |
Has this issue been resolved? |
me too |
I encountered a similar problem. It appears that the order in which the models are loaded can affect this. In my case, I initially loaded the mmpose model after the mmdet model; it's possible that, during the second load, mmengine changed its default scope to mmpose, which then broke inference for the mmdet model. When I instead loaded the mmpose model first and then the detector model, ran the detector inference, and then the pose estimation, the error was resolved. |
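The loading-order behaviour described above can be illustrated with a small, self-contained sketch. This is plain Python standing in for mmengine's scoped registries, not mmengine itself; the registry contents and the `build_transform` helper are stand-ins:

```python
# Toy stand-in for mmengine's scoped registries (illustrative only).
# Each toolbox registers its transforms under its own scope; lookups go
# through whichever scope is currently the default.
registries = {
    "mmdet": {"PackDetInputs": object},    # populated by importing mmdet
    "mmpose": {"PackPoseInputs": object},  # populated by importing mmpose
}

default_scope = "mmdet"

def build_transform(name):
    scope = registries[default_scope]
    if name not in scope:
        raise KeyError(
            f"{name} is not in the {default_scope}::transform registry")
    return scope[name]

build_transform("PackDetInputs")   # fine: the default scope is mmdet

default_scope = "mmpose"           # loading an mmpose model last can leave
                                   # the default scope set to mmpose
try:
    build_transform("PackDetInputs")  # reproduces the KeyError in this issue
except KeyError as e:
    print(e)
```

Under this model, loading the pose model first and the detector last leaves the default scope on the detector's side, which is why reordering the loads made the error disappear.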
KeyError: 'PackDetInputs is not in the mmpose::transform registry. Please check whether the value of `PackDetInputs` is correct or it was registered as expected. More details can be found at https://mmengine.readthedocs.io/en/latest/advanced_tutorials/config.html#import-the-custom-module' |
KeyError: 'PackDetInputs is not in the mmengine::transform registry. Please check whether the value of `PackDetInputs` is correct or it was registered as expected. More details can be found at https://mmengine.readthedocs.io/en/latest/advanced_tutorials/config.html#import-the-custom-module'
My issue is not resolved. I am using mmdet version 3.0.0 and mmcv version 2.0.0rc4.
This is my HRNet config:
model = dict(
type='FasterRCNN',
data_preprocessor=dict(
type='DetDataPreprocessor',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
bgr_to_rgb=True,
pad_size_divisor=32),
backbone=dict(
type='HRNet',
extra=dict(
stage1=dict(
num_modules=1,
num_branches=1,
block='BOTTLENECK',
num_blocks=(4, ),
num_channels=(64, )),
stage2=dict(
num_modules=1,
num_branches=2,
block='BASIC',
num_blocks=(4, 4),
num_channels=(18, 36)),
stage3=dict(
num_modules=4,
num_branches=3,
block='BASIC',
num_blocks=(4, 4, 4),
num_channels=(18, 36, 72)),
stage4=dict(
num_modules=3,
num_branches=4,
block='BASIC',
num_blocks=(4, 4, 4, 4),
num_channels=(18, 36, 72, 144))),
init_cfg=dict(
type='Pretrained', checkpoint='open-mmlab://msra/hrnetv2_w18')),
neck=dict(type='HRFPN', in_channels=[18, 36, 72, 144], out_channels=256),
rpn_head=dict(
type='RPNHead',
in_channels=256,
feat_channels=256,
anchor_generator=dict(
type='AnchorGenerator',
scales=[8],
ratios=[0.5, 1.0, 2.0],
strides=[4, 8, 16, 32, 64]),
bbox_coder=dict(
type='DeltaXYWHBBoxCoder',
target_means=[0.0, 0.0, 0.0, 0.0],
target_stds=[1.0, 1.0, 1.0, 1.0]),
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
roi_head=dict(
type='StandardRoIHead',
bbox_roi_extractor=dict(
type='SingleRoIExtractor',
roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
bbox_head=dict(
type='Shared2FCBBoxHead',
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=1,
bbox_coder=dict(
type='DeltaXYWHBBoxCoder',
target_means=[0.0, 0.0, 0.0, 0.0],
target_stds=[0.1, 0.1, 0.2, 0.2]),
reg_class_agnostic=False,
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type='L1Loss', loss_weight=1.0))),
train_cfg=dict(
rpn=dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.7,
neg_iou_thr=0.3,
min_pos_iou=0.3,
match_low_quality=True,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=256,
pos_fraction=0.5,
neg_pos_ub=-1,
add_gt_as_proposals=False),
allowed_border=-1,
pos_weight=-1,
debug=False),
rpn_proposal=dict(
nms_pre=2000,
max_per_img=1000,
nms=dict(type='nms', iou_threshold=0.7),
min_bbox_size=0),
rcnn=dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
match_low_quality=False,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
pos_weight=-1,
debug=False)),
test_cfg=dict(
rpn=dict(
nms_pre=1000,
max_per_img=1000,
nms=dict(type='nms', iou_threshold=0.7),
min_bbox_size=0),
rcnn=dict(
score_thr=0.05,
nms=dict(type='nms', iou_threshold=0.5),
max_per_img=100)))
dataset_type = 'CocoDataset'
data_root = '/mnt/2tb/General/Niharika/Experiment/mmdet3.0/model_config/coco/'
backend_args = None
train_pipeline = [
dict(type='LoadImageFromFile', backend_args=None),
dict(type='LoadAnnotations', with_bbox=True),
dict(type='Resize', scale=(1333, 800), keep_ratio=True),
dict(type='RandomFlip', prob=0.5),
dict(type='PackDetInputs')
]
test_pipeline = [
dict(type='LoadImageFromFile', backend_args=None),
dict(type='Resize', scale=(1333, 800), keep_ratio=True),
dict(type='LoadAnnotations', with_bbox=True),
dict(
type='PackDetInputs',
meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
'scale_factor'))
]
train_dataloader = dict(
batch_size=1,
num_workers=2,
persistent_workers=True,
sampler=dict(type='DefaultSampler', shuffle=True),
batch_sampler=dict(type='AspectRatioBatchSampler'),
dataset=dict(
type='CocoDataset',
data_root=
'/mnt/2tb/General/Niharika/Experiment/mmdet3.0/model_config/coco',
ann_file=
'/mnt/2tb/General/Niharika/Experiment/mmdet3.0/model_config/coco/annotations/instances_train2017.json',
data_prefix=dict(img='train2017/'),
filter_cfg=dict(filter_empty_gt=True, min_size=32),
pipeline=[
dict(type='LoadImageFromFile', backend_args=None),
dict(type='LoadAnnotations', with_bbox=True),
dict(type='Resize', scale=(1333, 800), keep_ratio=True),
dict(type='RandomFlip', prob=0.5),
dict(type='PackDetInputs')
],
metainfo=dict(classes=('ingots', ), palette=[(220, 20, 60)]),
backend_args=None))
val_dataloader = dict(
batch_size=1,
num_workers=2,
persistent_workers=True,
drop_last=False,
sampler=dict(type='DefaultSampler', shuffle=False),
dataset=dict(
type='CocoDataset',
data_root=
'/mnt/2tb/General/Niharika/Experiment/mmdet3.0/model_config/coco',
ann_file=
'/mnt/2tb/General/Niharika/Experiment/mmdet3.0/model_config/coco/annotations/instances_val2017.json',
data_prefix=dict(img='val2017/'),
test_mode=True,
pipeline=[
dict(type='LoadImageFromFile', backend_args=None),
dict(type='Resize', scale=(1333, 800), keep_ratio=True),
dict(type='LoadAnnotations', with_bbox=True),
dict(
type='PackDetInputs',
meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
'scale_factor'))
],
metainfo=dict(classes=('ingots', ), palette=[(220, 20, 60)]),
backend_args=None))
test_dataloader = dict(
batch_size=1,
num_workers=2,
persistent_workers=True,
drop_last=False,
sampler=dict(type='DefaultSampler', shuffle=False),
dataset=dict(
type='CocoDataset',
data_root=
'/mnt/2tb/General/Niharika/Experiment/mmdet3.0/model_config/coco',
ann_file=
'/mnt/2tb/General/Niharika/Experiment/mmdet3.0/model_config/coco/annotations/instances_val2017.json',
data_prefix=dict(img='val2017/'),
test_mode=True,
pipeline=[
dict(type='LoadImageFromFile', backend_args=None),
dict(type='Resize', scale=(1333, 800), keep_ratio=True),
dict(type='LoadAnnotations', with_bbox=True),
dict(
type='PackDetInputs',
meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
'scale_factor'))
],
metainfo=dict(classes=('ingots', ), palette=[(220, 20, 60)]),
backend_args=None))
val_evaluator = dict(
type='CocoMetric',
ann_file=
'/mnt/2tb/General/Niharika/Experiment/mmdet3.0/model_config/coco/annotations/instances_val2017.json',
metric='bbox',
format_only=False,
backend_args=None)
test_evaluator = dict(
type='CocoMetric',
ann_file=
'/mnt/2tb/General/Niharika/Experiment/mmdet3.0/model_config/coco/annotations/instances_val2017.json',
metric='bbox',
format_only=False,
backend_args=None)
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=24, val_interval=1)
val_cfg = dict(type='ValLoop')
test_cfg = dict(type='TestLoop')
param_scheduler = [
dict(
type='LinearLR', start_factor=0.001, by_epoch=False, begin=0, end=500),
dict(
type='MultiStepLR',
begin=0,
end=24,
by_epoch=True,
milestones=[16, 22],
gamma=0.1)
]
optim_wrapper = dict(
type='OptimWrapper',
optimizer=dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001))
auto_scale_lr = dict(enable=False, base_batch_size=16)
default_scope = 'mmdet'
default_hooks = dict(
timer=dict(type='IterTimerHook'),
logger=dict(type='LoggerHook', interval=50),
param_scheduler=dict(type='ParamSchedulerHook'),
checkpoint=dict(type='CheckpointHook', interval=1),
sampler_seed=dict(type='DistSamplerSeedHook'),
visualization=dict(type='DetVisualizationHook'))
env_cfg = dict(
cudnn_benchmark=False,
mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
dist_cfg=dict(backend='nccl'))
vis_backends = [dict(type='LocalVisBackend')]
visualizer = dict(
type='DetLocalVisualizer',
vis_backends=[dict(type='LocalVisBackend')],
name='visualizer')
log_processor = dict(type='LogProcessor', window_size=50, by_epoch=True)
log_level = 'INFO'
load_from = None
resume = False
max_epochs = 24
classes = ('ingots', )
launcher = 'none'
work_dir = './work_dirs/config_3.0.0_hrnet'
Yes, that's what I ran into. This indicates that you are running mmdet models under the mmpose scope. Try changing the order of your code. I'm not sure I'm right about this, and maybe an mmlab developer can answer better, but I didn't find anything in mmengine that switches the registry scope back when you alternate between mmdet and mmpose models. Maybe try to load the model, declare it, or read the model config right before you run inference or other parts of the det models. |
A hacky way is to register the modules every time you run inference on the mmdet model.
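In practice that workaround looks roughly like the sketch below. `register_all_modules` is the helper I recall from `mmdet.utils` in mmdet 3.x (its `init_default_scope` flag resets the default scope to `mmdet`); treat the exact name and signature as an assumption and check your installed version. The import is guarded so the sketch also runs where mmdet is not installed, and `run_detection` is a hypothetical wrapper:

```python
# Sketch of the "hacky" fix: re-register mmdet's modules (and reset the
# default scope) right before each detection call, so lookups resolve even
# after another toolbox has switched the default scope.
try:
    from mmdet.utils import register_all_modules  # mmdet 3.x helper, as recalled
except ImportError:
    register_all_modules = None  # mmdet not available in this environment

def run_detection(model, frame):
    """Hypothetical wrapper: fix the scope, then call the detector."""
    if register_all_modules is not None:
        register_all_modules(init_default_scope=True)  # scope back to 'mmdet'
    # inference_detector(model, frame) would go here in real code
    return model, frame
```

The cost is re-running registration on every call, which is why the pipeline-adaptation fix discussed below in the thread is preferable.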
|
Thanks for the clue #10900 (comment)! The detector pipeline has to be fixed for use under the mmpose scope: detector.cfg = adapt_mmdet_pipeline(detector.cfg)
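What `adapt_mmdet_pipeline` does, as I understand it, is prefix each transform type in the detector's pipeline with the `mmdet.` scope qualifier, so lookups resolve in mmdet's registry regardless of the current default scope. Below is an illustrative plain-Python re-implementation of that idea (my sketch, with a hypothetical name `adapt_det_pipeline`; use mmpose's actual `adapt_mmdet_pipeline` in real code):

```python
def adapt_det_pipeline(pipeline):
    """Prefix each transform type with the 'mmdet.' scope qualifier so it
    resolves in mmdet's registry under any default scope.
    (Illustrative re-implementation, not mmpose's actual code.)"""
    out = []
    for t in pipeline:
        t = dict(t)  # copy so the original config is left untouched
        if not t["type"].startswith("mmdet."):
            t["type"] = "mmdet." + t["type"]
        out.append(t)
    return out

test_pipeline = [
    dict(type="LoadImageFromFile"),
    dict(type="Resize", scale=(1333, 800), keep_ratio=True),
    dict(type="PackDetInputs"),
]
adapted = adapt_det_pipeline(test_pipeline)
# adapted[0]["type"] is now "mmdet.LoadImageFromFile", and so on
```

Because the scope is baked into the config once, the detector no longer depends on which library last changed the default scope.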
|
You should import all the transforms used in the pipeline. In my script, I wrote several transforms to apply to video preprocessing in my_transform.py; if you don't import those transforms, this error will occur. |
In conclusion, to fix this issue you have to import all the transforms used in the pipeline. |
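The "import your transforms" advice comes down to this: registration happens as a side effect of importing the module that defines the transform, because the register decorator runs at import time. A stand-alone sketch (toy registry, not mmengine; `MyVideoTransform` is a hypothetical name standing in for a custom transform like those in my_transform.py above):

```python
TRANSFORMS = {}  # toy registry standing in for mmengine's TRANSFORMS

def register_module(cls):
    # Runs when the defining module is imported; if that module is never
    # imported, the class never lands in the registry.
    TRANSFORMS[cls.__name__] = cls
    return cls

def build(cfg):
    name = cfg["type"]
    if name not in TRANSFORMS:
        raise KeyError(f"{name} is not in the transform registry")
    return TRANSFORMS[name]()

try:
    build(dict(type="MyVideoTransform"))  # fails: not yet registered
except KeyError as e:
    print(e)

# Importing the module (here: simply defining the class) executes the
# decorator...
@register_module
class MyVideoTransform:
    pass

build(dict(type="MyVideoTransform"))      # ...and the lookup now succeeds
```

This is why adding an `import my_transform` at the top of the inference script is enough to make the KeyError go away.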
Thanks for the information! This works! Now I don't have to load my detector every time! |
You're welcome!
|