
assert len(indices) == len(self) #62

Closed
winnerziqi opened this issue Oct 18, 2021 · 15 comments

Comments

@winnerziqi

Hello,
When I run training, it raises the error: `assert len(indices) == len(self), f"{indices} not equal {len(self)} while offset is: {offset}"`.
When I print the length info, I get: len(indices) = 26865, offset = 0, len(self) = 36650.
The detailed error info is below. Please help me.

Traceback (most recent call last):
  File "tools/train.py", line 198, in <module>
    main()
  File "tools/train.py", line 193, in main
    meta=meta,
  File "/data6/ziqiwen/code/softteacher/ssod/apis/train.py", line 206, in train_detector
    runner.run(data_loaders, cfg.workflow)
  File "/home/ziqiwen/anaconda3/envs/mm/lib/python3.7/site-packages/mmcv/runner/iter_based_runner.py", line 117, in run
    iter_loaders = [IterLoader(x) for x in data_loaders]
  File "/home/ziqiwen/anaconda3/envs/mm/lib/python3.7/site-packages/mmcv/runner/iter_based_runner.py", line 117, in <listcomp>
    iter_loaders = [IterLoader(x) for x in data_loaders]
  File "/home/ziqiwen/anaconda3/envs/mm/lib/python3.7/site-packages/mmcv/runner/iter_based_runner.py", line 23, in __init__
    self.iter_loader = iter(self._dataloader)
  File "/home/ziqiwen/anaconda3/envs/mm/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 291, in __iter__
    return _MultiProcessingDataLoaderIter(self)
  File "/home/ziqiwen/anaconda3/envs/mm/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 764, in __init__
    self._try_put_index()
  File "/home/ziqiwen/anaconda3/envs/mm/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 994, in _try_put_index
    index = self._next_index()
  File "/home/ziqiwen/anaconda3/envs/mm/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 357, in _next_index
    return next(self._sampler_iter)  # may raise StopIteration
  File "/home/ziqiwen/anaconda3/envs/mm/lib/python3.7/site-packages/torch/utils/data/sampler.py", line 208, in __iter__
    for idx in self.sampler:
  File "/data6/ziqiwen/code/softteacher/ssod/datasets/samplers/semi_sampler.py", line 189, in __iter__
    assert len(indices) == len(self)
AssertionError
Traceback (most recent call last):
  File "/home/ziqiwen/anaconda3/envs/mm/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/ziqiwen/anaconda3/envs/mm/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/ziqiwen/anaconda3/envs/mm/lib/python3.7/site-packages/torch/distributed/launch.py", line 261, in <module>
    main()
  File "/home/ziqiwen/anaconda3/envs/mm/lib/python3.7/site-packages/torch/distributed/launch.py", line 257, in main
    cmd=cmd)
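
To make the failing pattern concrete, here is a toy sketch (my own simplification, not the actual SemiBalanceSampler code in semi_sampler.py): the assert fires whenever the index list built in `__iter__` does not match the length the sampler declares in `__len__`.

```python
# Toy sketch of the failing pattern (not the real semi_sampler.py code):
# the sampler declares one length via __len__, builds its indices in
# __iter__ from the dataset split, and asserts that the two agree.
from torch.utils.data import Sampler


class ToySampler(Sampler):
    def __init__(self, data_source_size, declared_length):
        self.data_source_size = data_source_size
        self.declared_length = declared_length

    def __len__(self):
        # declared length, fixed up front (e.g. from the config)
        return self.declared_length

    def __iter__(self):
        # built length, taken from the actual data split
        indices = list(range(self.data_source_size))
        assert len(indices) == len(self), f"{len(indices)} not equal {len(self)}"
        return iter(indices)


# Mismatched sizes raise the same AssertionError as in the traceback above:
iter(ToySampler(data_source_size=26865, declared_length=36650))
```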

@MendelXu
Collaborator

Could you post the full log here?

@winnerziqi
Author

Thanks, here is my full log:

fatal: Not a git repository (or any parent up to mount point /data6)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
2021-10-18 13:36:46,968 - mmdet.ssod - INFO - [<StreamHandler (INFO)>, <FileHandler /data6/ziqiwen/code/softteacher/work_dirs/soft_teacher_faster_rcnn_r50_caffe_fpn_coco_180k/10/1/20211018_133646.log (INFO)>]
2021-10-18 13:36:46,968 - mmdet.ssod - INFO - Environment info:


sys.platform: linux
Python: 3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]
CUDA available: True
GPU 0,1,2,3: TITAN Xp
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 9.0, V9.0.176
GCC: gcc (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
PyTorch: 1.6.0
PyTorch compiling details: PyTorch built with:

  • GCC 7.3
  • C++ Version: 201402
  • Intel(R) oneAPI Math Kernel Library Version 2021.3-Product Build 20210617 for Intel(R) 64 architecture applications
  • Intel(R) MKL-DNN v1.5.0 (Git Hash e2ac1fac44c5078ca927cb9b90e1b3066a0b2ed0)
  • OpenMP 201511 (a.k.a. OpenMP 4.5)
  • NNPACK is enabled
  • CPU capability usage: AVX2
  • CUDA Runtime 10.1
  • NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
  • CuDNN 7.6.3
  • Magma 2.5.2
  • Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF,

TorchVision: 0.7.0
OpenCV: 4.5.3
MMCV: 1.3.12
MMCV Compiler: GCC 7.3
MMCV CUDA Compiler: 10.1
MMDetection: 2.16.0+

2021-10-18 13:36:49,973 - mmdet.ssod - INFO - Distributed training: True
2021-10-18 13:36:53,132 - mmdet.ssod - INFO - Config:
model = dict(
type='SoftTeacher',
model=dict(
type='FasterRCNN',
backbone=dict(
type='ResNet',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=False),
norm_eval=True,
style='caffe',
init_cfg=dict(
type='Pretrained',
checkpoint='open-mmlab://detectron2/resnet50_caffe')),
neck=dict(
type='FPN',
in_channels=[256, 512, 1024, 2048],
out_channels=256,
num_outs=5),
rpn_head=dict(
type='RPNHead',
in_channels=256,
feat_channels=256,
anchor_generator=dict(
type='AnchorGenerator',
scales=[8],
ratios=[0.5, 1.0, 2.0],
strides=[4, 8, 16, 32, 64]),
bbox_coder=dict(
type='DeltaXYWHBBoxCoder',
target_means=[0.0, 0.0, 0.0, 0.0],
target_stds=[1.0, 1.0, 1.0, 1.0]),
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
roi_head=dict(
type='StandardRoIHead',
bbox_roi_extractor=dict(
type='SingleRoIExtractor',
roi_layer=dict(
type='RoIAlign', output_size=7, sampling_ratio=0),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
bbox_head=dict(
type='Shared2FCBBoxHead',
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=80,
bbox_coder=dict(
type='DeltaXYWHBBoxCoder',
target_means=[0.0, 0.0, 0.0, 0.0],
target_stds=[0.1, 0.1, 0.2, 0.2]),
reg_class_agnostic=False,
loss_cls=dict(
type='CrossEntropyLoss',
use_sigmoid=False,
loss_weight=1.0),
loss_bbox=dict(type='L1Loss', loss_weight=1.0))),
train_cfg=dict(
rpn=dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.7,
neg_iou_thr=0.3,
min_pos_iou=0.3,
match_low_quality=True,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=256,
pos_fraction=0.5,
neg_pos_ub=-1,
add_gt_as_proposals=False),
allowed_border=-1,
pos_weight=-1,
debug=False),
rpn_proposal=dict(
nms_pre=2000,
max_per_img=1000,
nms=dict(type='nms', iou_threshold=0.7),
min_bbox_size=0),
rcnn=dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
match_low_quality=False,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
pos_weight=-1,
debug=False)),
test_cfg=dict(
rpn=dict(
nms_pre=1000,
max_per_img=1000,
nms=dict(type='nms', iou_threshold=0.7),
min_bbox_size=0),
rcnn=dict(
score_thr=0.05,
nms=dict(type='nms', iou_threshold=0.5),
max_per_img=100))),
train_cfg=dict(
use_teacher_proposal=False,
pseudo_label_initial_score_thr=0.5,
rpn_pseudo_threshold=0.9,
cls_pseudo_threshold=0.9,
reg_pseudo_threshold=0.01,
jitter_times=10,
jitter_scale=0.06,
min_pseduo_box_size=0,
unsup_weight=4.0),
test_cfg=dict(inference_on='student'))
dataset_type = 'CocoDataset'
img_norm_cfg = dict(
mean=[103.53, 116.28, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', with_bbox=True),
dict(
type='Sequential',
transforms=[
dict(
type='RandResize',
img_scale=[(1333, 400), (1333, 1200)],
multiscale_mode='range',
keep_ratio=True),
dict(type='RandFlip', flip_ratio=0.5),
dict(
type='OneOf',
transforms=[
dict(type='Identity'),
dict(type='AutoContrast'),
dict(type='RandEqualize'),
dict(type='RandSolarize'),
dict(type='RandColor'),
dict(type='RandContrast'),
dict(type='RandBrightness'),
dict(type='RandSharpness'),
dict(type='RandPosterize')
])
],
record=True),
dict(type='Pad', size_divisor=32),
dict(
type='Normalize',
mean=[103.53, 116.28, 123.675],
std=[1.0, 1.0, 1.0],
to_rgb=False),
dict(type='ExtraAttrs', tag='sup'),
dict(type='DefaultFormatBundle'),
dict(
type='Collect',
keys=['img', 'gt_bboxes', 'gt_labels'],
meta_keys=('filename', 'ori_shape', 'img_shape', 'img_norm_cfg',
'pad_shape', 'scale_factor', 'tag'))
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(1333, 800),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(
type='Normalize',
mean=[103.53, 116.28, 123.675],
std=[1.0, 1.0, 1.0],
to_rgb=False),
dict(type='Pad', size_divisor=32),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img'])
])
]
data = dict(
samples_per_gpu=4,
workers_per_gpu=4,
train=dict(
type='SemiDataset',
sup=dict(
type='CocoDataset',
ann_file=
'/data6/ziqiwen/code/unbiased-teacher/datasets/coco/annotations/semi_supervised/instances_train2017.1@10.json',
img_prefix=
'/data6/ziqiwen/code/unbiased-teacher/datasets/coco/train2017/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', with_bbox=True),
dict(
type='Sequential',
transforms=[
dict(
type='RandResize',
img_scale=[(1333, 400), (1333, 1200)],
multiscale_mode='range',
keep_ratio=True),
dict(type='RandFlip', flip_ratio=0.5),
dict(
type='OneOf',
transforms=[
dict(type='Identity'),
dict(type='AutoContrast'),
dict(type='RandEqualize'),
dict(type='RandSolarize'),
dict(type='RandColor'),
dict(type='RandContrast'),
dict(type='RandBrightness'),
dict(type='RandSharpness'),
dict(type='RandPosterize')
])
],
record=True),
dict(type='Pad', size_divisor=32),
dict(
type='Normalize',
mean=[103.53, 116.28, 123.675],
std=[1.0, 1.0, 1.0],
to_rgb=False),
dict(type='ExtraAttrs', tag='sup'),
dict(type='DefaultFormatBundle'),
dict(
type='Collect',
keys=['img', 'gt_bboxes', 'gt_labels'],
meta_keys=('filename', 'ori_shape', 'img_shape',
'img_norm_cfg', 'pad_shape', 'scale_factor',
'tag'))
]),
unsup=dict(
type='CocoDataset',
ann_file=
'/data6/ziqiwen/code/unbiased-teacher/datasets/coco/annotations/semi_supervised/instances_train2017.1@10-unlabeled.json',
img_prefix=
'/data6/ziqiwen/code/unbiased-teacher/datasets/coco/train2017/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='PseudoSamples', with_bbox=True),
dict(
type='MultiBranch',
unsup_teacher=[
dict(
type='Sequential',
transforms=[
dict(
type='RandResize',
img_scale=[(1333, 400), (1333, 1200)],
multiscale_mode='range',
keep_ratio=True),
dict(type='RandFlip', flip_ratio=0.5),
dict(
type='ShuffledSequential',
transforms=[
dict(
type='OneOf',
transforms=[
dict(type='Identity'),
dict(type='AutoContrast'),
dict(type='RandEqualize'),
dict(type='RandSolarize'),
dict(type='RandColor'),
dict(type='RandContrast'),
dict(type='RandBrightness'),
dict(type='RandSharpness'),
dict(type='RandPosterize')
]),
dict(
type='OneOf',
transforms=[{
'type': 'RandTranslate',
'x': (-0.1, 0.1)
}, {
'type': 'RandTranslate',
'y': (-0.1, 0.1)
}, {
'type': 'RandRotate',
'angle': (-30, 30)
},
[{
'type':
'RandShear',
'x': (-30, 30)
}, {
'type':
'RandShear',
'y': (-30, 30)
}]])
]),
dict(
type='RandErase',
n_iterations=(1, 5),
size=[0, 0.2],
squared=True)
],
record=True),
dict(type='Pad', size_divisor=32),
dict(
type='Normalize',
mean=[103.53, 116.28, 123.675],
std=[1.0, 1.0, 1.0],
to_rgb=False),
dict(type='ExtraAttrs', tag='unsup_student'),
dict(type='DefaultFormatBundle'),
dict(
type='Collect',
keys=['img', 'gt_bboxes', 'gt_labels'],
meta_keys=('filename', 'ori_shape', 'img_shape',
'img_norm_cfg', 'pad_shape',
'scale_factor', 'tag',
'transform_matrix'))
],
unsup_student=[
dict(
type='Sequential',
transforms=[
dict(
type='RandResize',
img_scale=[(1333, 400), (1333, 1200)],
multiscale_mode='range',
keep_ratio=True),
dict(type='RandFlip', flip_ratio=0.5)
],
record=True),
dict(type='Pad', size_divisor=32),
dict(
type='Normalize',
mean=[103.53, 116.28, 123.675],
std=[1.0, 1.0, 1.0],
to_rgb=False),
dict(type='ExtraAttrs', tag='unsup_teacher'),
dict(type='DefaultFormatBundle'),
dict(
type='Collect',
keys=['img', 'gt_bboxes', 'gt_labels'],
meta_keys=('filename', 'ori_shape', 'img_shape',
'img_norm_cfg', 'pad_shape',
'scale_factor', 'tag',
'transform_matrix'))
])
],
filter_empty_gt=False)),
val=dict(
type='CocoDataset',
ann_file=
'/data6/ziqiwen/code/unbiased-teacher/datasets/coco/annotations/semi_supervised/instances_train2017.1@10.json',
img_prefix=
'/data6/ziqiwen/code/unbiased-teacher/datasets/coco/train2017/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(1333, 800),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(
type='Normalize',
mean=[103.53, 116.28, 123.675],
std=[1.0, 1.0, 1.0],
to_rgb=False),
dict(type='Pad', size_divisor=32),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img'])
])
]),
test=dict(
type='CocoDataset',
ann_file=
'/data6/ziqiwen/code/unbiased-teacher/datasets/coco/annotations/semi_supervised/instances_train2017.1@10.json',
img_prefix=
'/data6/ziqiwen/code/unbiased-teacher/datasets/coco/train2017/',
pipeline=[
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(1333, 800),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(
type='Normalize',
mean=[103.53, 116.28, 123.675],
std=[1.0, 1.0, 1.0],
to_rgb=False),
dict(type='Pad', size_divisor=32),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img'])
])
]),
sampler=dict(
train=dict(
type='SemiBalanceSampler',
sample_ratio=[1, 4],
by_prob=True,
epoch_length=7330)))
evaluation = dict(interval=4000, metric='bbox', type='SubModulesDistEvalHook')
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=500,
warmup_ratio=0.001,
step=[120000, 160000])
runner = dict(type='IterBasedRunner', max_iters=180000)
checkpoint_config = dict(interval=4000, by_epoch=False, max_keep_ckpts=20)
log_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')])
custom_hooks = [
dict(type='NumClassCheckHook'),
dict(type='WeightSummary'),
dict(type='MeanTeacher', momentum=0.999, interval=1, warm_up=0)
]
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = None
resume_from = None
workflow = [('train', 1)]
mmdet_base = '../base'
strong_pipeline = [
dict(
type='Sequential',
transforms=[
dict(
type='RandResize',
img_scale=[(1333, 400), (1333, 1200)],
multiscale_mode='range',
keep_ratio=True),
dict(type='RandFlip', flip_ratio=0.5),
dict(
type='ShuffledSequential',
transforms=[
dict(
type='OneOf',
transforms=[
dict(type='Identity'),
dict(type='AutoContrast'),
dict(type='RandEqualize'),
dict(type='RandSolarize'),
dict(type='RandColor'),
dict(type='RandContrast'),
dict(type='RandBrightness'),
dict(type='RandSharpness'),
dict(type='RandPosterize')
]),
dict(
type='OneOf',
transforms=[{
'type': 'RandTranslate',
'x': (-0.1, 0.1)
}, {
'type': 'RandTranslate',
'y': (-0.1, 0.1)
}, {
'type': 'RandRotate',
'angle': (-30, 30)
},
[{
'type': 'RandShear',
'x': (-30, 30)
}, {
'type': 'RandShear',
'y': (-30, 30)
}]])
]),
dict(
type='RandErase',
n_iterations=(1, 5),
size=[0, 0.2],
squared=True)
],
record=True),
dict(type='Pad', size_divisor=32),
dict(
type='Normalize',
mean=[103.53, 116.28, 123.675],
std=[1.0, 1.0, 1.0],
to_rgb=False),
dict(type='ExtraAttrs', tag='unsup_student'),
dict(type='DefaultFormatBundle'),
dict(
type='Collect',
keys=['img', 'gt_bboxes', 'gt_labels'],
meta_keys=('filename', 'ori_shape', 'img_shape', 'img_norm_cfg',
'pad_shape', 'scale_factor', 'tag', 'transform_matrix'))
]
weak_pipeline = [
dict(
type='Sequential',
transforms=[
dict(
type='RandResize',
img_scale=[(1333, 400), (1333, 1200)],
multiscale_mode='range',
keep_ratio=True),
dict(type='RandFlip', flip_ratio=0.5)
],
record=True),
dict(type='Pad', size_divisor=32),
dict(
type='Normalize',
mean=[103.53, 116.28, 123.675],
std=[1.0, 1.0, 1.0],
to_rgb=False),
dict(type='ExtraAttrs', tag='unsup_teacher'),
dict(type='DefaultFormatBundle'),
dict(
type='Collect',
keys=['img', 'gt_bboxes', 'gt_labels'],
meta_keys=('filename', 'ori_shape', 'img_shape', 'img_norm_cfg',
'pad_shape', 'scale_factor', 'tag', 'transform_matrix'))
]
unsup_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='PseudoSamples', with_bbox=True),
dict(
type='MultiBranch',
unsup_teacher=[
dict(
type='Sequential',
transforms=[
dict(
type='RandResize',
img_scale=[(1333, 400), (1333, 1200)],
multiscale_mode='range',
keep_ratio=True),
dict(type='RandFlip', flip_ratio=0.5),
dict(
type='ShuffledSequential',
transforms=[
dict(
type='OneOf',
transforms=[
dict(type='Identity'),
dict(type='AutoContrast'),
dict(type='RandEqualize'),
dict(type='RandSolarize'),
dict(type='RandColor'),
dict(type='RandContrast'),
dict(type='RandBrightness'),
dict(type='RandSharpness'),
dict(type='RandPosterize')
]),
dict(
type='OneOf',
transforms=[{
'type': 'RandTranslate',
'x': (-0.1, 0.1)
}, {
'type': 'RandTranslate',
'y': (-0.1, 0.1)
}, {
'type': 'RandRotate',
'angle': (-30, 30)
},
[{
'type': 'RandShear',
'x': (-30, 30)
}, {
'type': 'RandShear',
'y': (-30, 30)
}]])
]),
dict(
type='RandErase',
n_iterations=(1, 5),
size=[0, 0.2],
squared=True)
],
record=True),
dict(type='Pad', size_divisor=32),
dict(
type='Normalize',
mean=[103.53, 116.28, 123.675],
std=[1.0, 1.0, 1.0],
to_rgb=False),
dict(type='ExtraAttrs', tag='unsup_student'),
dict(type='DefaultFormatBundle'),
dict(
type='Collect',
keys=['img', 'gt_bboxes', 'gt_labels'],
meta_keys=('filename', 'ori_shape', 'img_shape',
'img_norm_cfg', 'pad_shape', 'scale_factor', 'tag',
'transform_matrix'))
],
unsup_student=[
dict(
type='Sequential',
transforms=[
dict(
type='RandResize',
img_scale=[(1333, 400), (1333, 1200)],
multiscale_mode='range',
keep_ratio=True),
dict(type='RandFlip', flip_ratio=0.5)
],
record=True),
dict(type='Pad', size_divisor=32),
dict(
type='Normalize',
mean=[103.53, 116.28, 123.675],
std=[1.0, 1.0, 1.0],
to_rgb=False),
dict(type='ExtraAttrs', tag='unsup_teacher'),
dict(type='DefaultFormatBundle'),
dict(
type='Collect',
keys=['img', 'gt_bboxes', 'gt_labels'],
meta_keys=('filename', 'ori_shape', 'img_shape',
'img_norm_cfg', 'pad_shape', 'scale_factor', 'tag',
'transform_matrix'))
])
]
fp16 = dict(loss_scale='dynamic')
fold = 1
percent = 10
work_dir = 'work_dirs/soft_teacher_faster_rcnn_r50_caffe_fpn_coco_180k/10/1'
cfg_name = 'soft_teacher_faster_rcnn_r50_caffe_fpn_coco_180k'
gpu_ids = range(0, 1)

/home/ziqiwen/code/mmdetection/mmdet/core/anchor/builder.py:17: UserWarning: build_anchor_generator would be deprecated soon, please use build_prior_generator
'build_anchor_generator would be deprecated soon, please use '
2021-10-18 13:36:54,143 - mmdet.ssod - INFO - initialize ResNet with init_cfg {'type': 'Pretrained', 'checkpoint': 'open-mmlab://detectron2/resnet50_caffe'}
2021-10-18 13:36:54,144 - mmcv - INFO - load model from: open-mmlab://detectron2/resnet50_caffe
2021-10-18 13:36:54,144 - mmcv - INFO - Use load_from_openmmlab loader
2021-10-18 13:36:54,265 - mmcv - WARNING - The model and loaded state dict do not match exactly

unexpected key in source state_dict: conv1.bias

2021-10-18 13:36:54,295 - mmdet.ssod - INFO - initialize FPN with init_cfg {'type': 'Xavier', 'layer': 'Conv2d', 'distribution': 'uniform'}
2021-10-18 13:36:54,328 - mmdet.ssod - INFO - initialize RPNHead with init_cfg {'type': 'Normal', 'layer': 'Conv2d', 'std': 0.01}
2021-10-18 13:36:54,337 - mmdet.ssod - INFO - initialize Shared2FCBBoxHead with init_cfg [{'type': 'Normal', 'std': 0.01, 'override': {'name': 'fc_cls'}}, {'type': 'Normal', 'std': 0.001, 'override': {'name': 'fc_reg'}}, {'type': 'Xavier', 'layer': 'Linear', 'override': [{'name': 'shared_fcs'}, {'name': 'cls_fcs'}, {'name': 'reg_fcs'}]}]
2021-10-18 13:36:54,773 - mmdet.ssod - INFO - initialize ResNet with init_cfg {'type': 'Pretrained', 'checkpoint': 'open-mmlab://detectron2/resnet50_caffe'}
2021-10-18 13:36:54,774 - mmcv - INFO - load model from: open-mmlab://detectron2/resnet50_caffe
2021-10-18 13:36:54,774 - mmcv - INFO - Use load_from_openmmlab loader
2021-10-18 13:36:54,883 - mmcv - WARNING - The model and loaded state dict do not match exactly

unexpected key in source state_dict: conv1.bias

2021-10-18 13:36:54,912 - mmdet.ssod - INFO - initialize FPN with init_cfg {'type': 'Xavier', 'layer': 'Conv2d', 'distribution': 'uniform'}
2021-10-18 13:36:54,943 - mmdet.ssod - INFO - initialize RPNHead with init_cfg {'type': 'Normal', 'layer': 'Conv2d', 'std': 0.01}
2021-10-18 13:36:54,953 - mmdet.ssod - INFO - initialize Shared2FCBBoxHead with init_cfg [{'type': 'Normal', 'std': 0.01, 'override': {'name': 'fc_cls'}}, {'type': 'Normal', 'std': 0.001, 'override': {'name': 'fc_reg'}}, {'type': 'Xavier', 'layer': 'Linear', 'override': [{'name': 'shared_fcs'}, {'name': 'cls_fcs'}, {'name': 'reg_fcs'}]}]
loading annotations into memory...
Done (t=1.48s)
creating index...
index created!
loading annotations into memory...
Done (t=14.08s)
creating index...
index created!
fatal: Not a git repository (or any parent up to mount point /data6)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
loading annotations into memory...
Done (t=1.17s)
creating index...
index created!
2021-10-18 13:37:18,183 - mmdet.ssod - INFO - Start running, host: ziqiwen@ISIP-IW4200-4G-3, work_dir: /data6/ziqiwen/code/softteacher/work_dirs/soft_teacher_faster_rcnn_r50_caffe_fpn_coco_180k/10/1
2021-10-18 13:37:18,184 - mmdet.ssod - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH ) StepLrUpdaterHook
(ABOVE_NORMAL) Fp16OptimizerHook
(NORMAL ) CheckpointHook
(NORMAL ) WeightSummary
(NORMAL ) MeanTeacher
(80 ) SubModulesDistEvalHook
(VERY_LOW ) TextLoggerHook

before_train_epoch:
(VERY_HIGH ) StepLrUpdaterHook
(NORMAL ) NumClassCheckHook
(LOW ) IterTimerHook
(80 ) SubModulesDistEvalHook
(VERY_LOW ) TextLoggerHook

before_train_iter:
(VERY_HIGH ) StepLrUpdaterHook
(NORMAL ) MeanTeacher
(LOW ) IterTimerHook
(80 ) SubModulesDistEvalHook

after_train_iter:
(ABOVE_NORMAL) Fp16OptimizerHook
(NORMAL ) CheckpointHook
(NORMAL ) MeanTeacher
(LOW ) IterTimerHook
(80 ) SubModulesDistEvalHook
(VERY_LOW ) TextLoggerHook

after_train_epoch:
(NORMAL ) CheckpointHook
(80 ) SubModulesDistEvalHook
(VERY_LOW ) TextLoggerHook

before_val_epoch:
(NORMAL ) NumClassCheckHook
(LOW ) IterTimerHook
(VERY_LOW ) TextLoggerHook

before_val_iter:
(LOW ) IterTimerHook

after_val_iter:
(LOW ) IterTimerHook

after_val_epoch:
(VERY_LOW ) TextLoggerHook

2021-10-18 13:37:18,184 - mmdet.ssod - INFO - workflow: [('train', 1)], max: 180000 iters
2021-10-18 13:37:18,270 - mmdet.ssod - INFO -
+--------------------------------------------------------------------------------------------------------------------+
| Model Information |
+------------------------------------------------+-----------+---------------+-----------------------+------+--------+
| Name | Optimized | Shape | Value Scale [Min,Max] | Lr | Wd |
+------------------------------------------------+-----------+---------------+-----------------------+------+--------+
| teacher.backbone.conv1.weight | N | 64X3X7X7 | Min:-0.671 Max:0.704 | 0.01 | 0.0001 |
| teacher.backbone.bn1.weight | N | 64 | Min:0.513 Max:2.669 | 0.01 | 0.0001 |
| teacher.backbone.bn1.bias | N | 64 | Min:-2.654 Max:6.354 | 0.01 | 0.0001 |
| teacher.backbone.layer1.0.conv1.weight | N | 64X64X1X1 | Min:-0.717 Max:0.392 | 0.01 | 0.0001 |
| teacher.backbone.layer1.0.bn1.weight | N | 64 | Min:0.509 Max:2.066 | 0.01 | 0.0001 |
| teacher.backbone.layer1.0.bn1.bias | N | 64 | Min:-2.411 Max:3.608 | 0.01 | 0.0001 |
| teacher.backbone.layer1.0.conv2.weight | N | 64X64X3X3 | Min:-0.390 Max:0.364 | 0.01 | 0.0001 |
| teacher.backbone.layer1.0.bn2.weight | N | 64 | Min:0.420 Max:2.530 | 0.01 | 0.0001 |
| teacher.backbone.layer1.0.bn2.bias | N | 64 | Min:-2.286 Max:5.913 | 0.01 | 0.0001 |
| teacher.backbone.layer1.0.conv3.weight | N | 256X64X1X1 | Min:-0.397 Max:0.348 | 0.01 | 0.0001 |
| teacher.backbone.layer1.0.bn3.weight | N | 256 | Min:0.011 Max:2.820 | 0.01 | 0.0001 |
| teacher.backbone.layer1.0.bn3.bias | N | 256 | Min:-1.126 Max:1.522 | 0.01 | 0.0001 |
| teacher.backbone.layer1.0.downsample.0.weight | N | 256X64X1X1 | Min:-0.772 Max:0.900 | 0.01 | 0.0001 |
| teacher.backbone.layer1.0.downsample.1.weight | N | 256 | Min:0.004 Max:3.064 | 0.01 | 0.0001 |
| teacher.backbone.layer1.0.downsample.1.bias | N | 256 | Min:-1.126 Max:1.522 | 0.01 | 0.0001 |
| teacher.backbone.layer1.1.conv1.weight | N | 64X256X1X1 | Min:-0.297 Max:0.220 | 0.01 | 0.0001 |
| teacher.backbone.layer1.1.bn1.weight | N | 64 | Min:0.746 Max:1.949 | 0.01 | 0.0001 |
| teacher.backbone.layer1.1.bn1.bias | N | 64 | Min:-1.688 Max:1.578 | 0.01 | 0.0001 |
| teacher.backbone.layer1.1.conv2.weight | N | 64X64X3X3 | Min:-0.240 Max:0.318 | 0.01 | 0.0001 |
| teacher.backbone.layer1.1.bn2.weight | N | 64 | Min:0.621 Max:1.618 | 0.01 | 0.0001 |
| teacher.backbone.layer1.1.bn2.bias | N | 64 | Min:-2.003 Max:2.398 | 0.01 | 0.0001 |
| teacher.backbone.layer1.1.conv3.weight | N | 256X64X1X1 | Min:-0.240 Max:0.280 | 0.01 | 0.0001 |
| teacher.backbone.layer1.1.bn3.weight | N | 256 | Min:-0.017 Max:2.130 | 0.01 | 0.0001 |
| teacher.backbone.layer1.1.bn3.bias | N | 256 | Min:-1.711 Max:1.291 | 0.01 | 0.0001 |
| teacher.backbone.layer1.2.conv1.weight | N | 64X256X1X1 | Min:-0.210 Max:0.264 | 0.01 | 0.0001 |
| teacher.backbone.layer1.2.bn1.weight | N | 64 | Min:0.574 Max:1.688 | 0.01 | 0.0001 |
| teacher.backbone.layer1.2.bn1.bias | N | 64 | Min:-1.876 Max:1.090 | 0.01 | 0.0001 |
| teacher.backbone.layer1.2.conv2.weight | N | 64X64X3X3 | Min:-0.218 Max:0.201 | 0.01 | 0.0001 |
| teacher.backbone.layer1.2.bn2.weight | N | 64 | Min:0.757 Max:1.649 | 0.01 | 0.0001 |
| teacher.backbone.layer1.2.bn2.bias | N | 64 | Min:-2.221 Max:1.878 | 0.01 | 0.0001 |
| teacher.backbone.layer1.2.conv3.weight | N | 256X64X1X1 | Min:-0.275 Max:0.350 | 0.01 | 0.0001 |
| teacher.backbone.layer1.2.bn3.weight | N | 256 | Min:-0.058 Max:2.154 | 0.01 | 0.0001 |
| teacher.backbone.layer1.2.bn3.bias | N | 256 | Min:-1.570 Max:1.535 | 0.01 | 0.0001 |
| teacher.backbone.layer2.0.conv1.weight | N | 128X256X1X1 | Min:-0.334 Max:0.300 | 0.01 | 0.0001 |
| teacher.backbone.layer2.0.bn1.weight | N | 128 | Min:0.610 Max:1.642 | 0.01 | 0.0001 |
| teacher.backbone.layer2.0.bn1.bias | N | 128 | Min:-1.579 Max:1.449 | 0.01 | 0.0001 |
| teacher.backbone.layer2.0.conv2.weight | N | 128X128X3X3 | Min:-0.384 Max:0.377 | 0.01 | 0.0001 |
| teacher.backbone.layer2.0.bn2.weight | N | 128 | Min:0.605 Max:1.622 | 0.01 | 0.0001 |
| teacher.backbone.layer2.0.bn2.bias | N | 128 | Min:-2.768 Max:1.747 | 0.01 | 0.0001 |
| teacher.backbone.layer2.0.conv3.weight | N | 512X128X1X1 | Min:-0.374 Max:0.434 | 0.01 | 0.0001 |
| teacher.backbone.layer2.0.bn3.weight | N | 512 | Min:-0.007 Max:2.730 | 0.01 | 0.0001 |
| teacher.backbone.layer2.0.bn3.bias | N | 512 | Min:-1.545 Max:1.256 | 0.01 | 0.0001 |
| teacher.backbone.layer2.0.downsample.0.weight | N | 512X256X1X1 | Min:-0.466 Max:0.642 | 0.01 | 0.0001 |
| teacher.backbone.layer2.0.downsample.1.weight | N | 512 | Min:0.006 Max:2.552 | 0.01 | 0.0001 |
| teacher.backbone.layer2.0.downsample.1.bias | N | 512 | Min:-1.545 Max:1.256 | 0.01 | 0.0001 |
| teacher.backbone.layer2.1.conv1.weight | N | 128X512X1X1 | Min:-0.162 Max:0.195 | 0.01 | 0.0001 |
| teacher.backbone.layer2.1.bn1.weight | N | 128 | Min:0.578 Max:1.429 | 0.01 | 0.0001 |
| teacher.backbone.layer2.1.bn1.bias | N | 128 | Min:-4.348 Max:0.588 | 0.01 | 0.0001 |
| teacher.backbone.layer2.1.conv2.weight | N | 128X128X3X3 | Min:-0.176 Max:0.177 | 0.01 | 0.0001 |
| teacher.backbone.layer2.1.bn2.weight | N | 128 | Min:0.511 Max:1.794 | 0.01 | 0.0001 |
| teacher.backbone.layer2.1.bn2.bias | N | 128 | Min:-3.825 Max:1.343 | 0.01 | 0.0001 |
| teacher.backbone.layer2.1.conv3.weight | N | 512X128X1X1 | Min:-0.344 Max:0.336 | 0.01 | 0.0001 |
| teacher.backbone.layer2.1.bn3.weight | N | 512 | Min:-0.072 Max:2.122 | 0.01 | 0.0001 |
| teacher.backbone.layer2.1.bn3.bias | N | 512 | Min:-1.502 Max:1.166 | 0.01 | 0.0001 |
| teacher.backbone.layer2.2.conv1.weight | N | 128X512X1X1 | Min:-0.330 Max:0.369 | 0.01 | 0.0001 |
| teacher.backbone.layer2.2.bn1.weight | N | 128 | Min:0.406 Max:1.696 | 0.01 | 0.0001 |
| teacher.backbone.layer2.2.bn1.bias | N | 128 | Min:-2.696 Max:1.944 | 0.01 | 0.0001 |
| teacher.backbone.layer2.2.conv2.weight | N | 128X128X3X3 | Min:-0.326 Max:0.374 | 0.01 | 0.0001 |
| teacher.backbone.layer2.2.bn2.weight | N | 128 | Min:0.460 Max:2.179 | 0.01 | 0.0001 |
| teacher.backbone.layer2.2.bn2.bias | N | 128 | Min:-1.587 Max:0.589 | 0.01 | 0.0001 |
| teacher.backbone.layer2.2.conv3.weight | N | 512X128X1X1 | Min:-0.288 Max:0.232 | 0.01 | 0.0001 |
| teacher.backbone.layer2.2.bn3.weight | N | 512 | Min:-0.006 Max:3.043 | 0.01 | 0.0001 |
| teacher.backbone.layer2.2.bn3.bias | N | 512 | Min:-2.369 Max:0.440 | 0.01 | 0.0001 |
| teacher.backbone.layer2.3.conv1.weight | N | 128X512X1X1 | Min:-0.298 Max:0.346 | 0.01 | 0.0001 |
| teacher.backbone.layer2.3.bn1.weight | N | 128 | Min:0.736 Max:2.394 | 0.01 | 0.0001 |
| teacher.backbone.layer2.3.bn1.bias | N | 128 | Min:-2.643 Max:0.756 | 0.01 | 0.0001 |
| teacher.backbone.layer2.3.conv2.weight | N | 128X128X3X3 | Min:-0.272 Max:0.208 | 0.01 | 0.0001 |
| teacher.backbone.layer2.3.bn2.weight | N | 128 | Min:0.682 Max:1.694 | 0.01 | 0.0001 |
| teacher.backbone.layer2.3.bn2.bias | N | 128 | Min:-1.365 Max:1.599 | 0.01 | 0.0001 |
| teacher.backbone.layer2.3.conv3.weight | N | 512X128X1X1 | Min:-0.279 Max:0.281 | 0.01 | 0.0001 |
| teacher.backbone.layer2.3.bn3.weight | N | 512 | Min:-0.009 Max:1.721 | 0.01 | 0.0001 |
| teacher.backbone.layer2.3.bn3.bias | N | 512 | Min:-1.897 Max:1.182 | 0.01 | 0.0001 |
| teacher.backbone.layer3.0.conv1.weight | N | 256X512X1X1 | Min:-0.230 Max:0.341 | 0.01 | 0.0001 |
| teacher.backbone.layer3.0.bn1.weight | N | 256 | Min:0.621 Max:1.636 | 0.01 | 0.0001 |
| teacher.backbone.layer3.0.bn1.bias | N | 256 | Min:-1.420 Max:0.917 | 0.01 | 0.0001 |
| teacher.backbone.layer3.0.conv2.weight | N | 256X256X3X3 | Min:-0.267 Max:0.179 | 0.01 | 0.0001 |
| teacher.backbone.layer3.0.bn2.weight | N | 256 | Min:0.585 Max:1.749 | 0.01 | 0.0001 |
| teacher.backbone.layer3.0.bn2.bias | N | 256 | Min:-1.837 Max:1.398 | 0.01 | 0.0001 |
| teacher.backbone.layer3.0.conv3.weight | N | 1024X256X1X1 | Min:-0.333 Max:0.384 | 0.01 | 0.0001 |
| teacher.backbone.layer3.0.bn3.weight | N | 1024 | Min:0.071 Max:2.367 | 0.01 | 0.0001 |
| teacher.backbone.layer3.0.bn3.bias | N | 1024 | Min:-0.938 Max:0.887 | 0.01 | 0.0001 |
| teacher.backbone.layer3.0.downsample.0.weight | N | 1024X512X1X1 | Min:-0.333 Max:0.421 | 0.01 | 0.0001 |
| teacher.backbone.layer3.0.downsample.1.weight | N | 1024 | Min:0.034 Max:2.779 | 0.01 | 0.0001 |
| teacher.backbone.layer3.0.downsample.1.bias | N | 1024 | Min:-0.938 Max:0.887 | 0.01 | 0.0001 |
| teacher.backbone.layer3.1.conv1.weight | N | 256X1024X1X1 | Min:-0.197 Max:0.236 | 0.01 | 0.0001 |
| teacher.backbone.layer3.1.bn1.weight | N | 256 | Min:0.566 Max:1.743 | 0.01 | 0.0001 |
| teacher.backbone.layer3.1.bn1.bias | N | 256 | Min:-2.703 Max:1.042 | 0.01 | 0.0001 |
| teacher.backbone.layer3.1.conv2.weight | N | 256X256X3X3 | Min:-0.436 Max:0.196 | 0.01 | 0.0001 |
| teacher.backbone.layer3.1.bn2.weight | N | 256 | Min:0.515 Max:2.301 | 0.01 | 0.0001 |
| teacher.backbone.layer3.1.bn2.bias | N | 256 | Min:-2.548 Max:1.856 | 0.01 | 0.0001 |
| teacher.backbone.layer3.1.conv3.weight | N | 1024X256X1X1 | Min:-0.438 Max:0.295 | 0.01 | 0.0001 |
| teacher.backbone.layer3.1.bn3.weight | N | 1024 | Min:0.055 Max:1.943 | 0.01 | 0.0001 |
| teacher.backbone.layer3.1.bn3.bias | N | 1024 | Min:-1.647 Max:1.016 | 0.01 | 0.0001 |
| teacher.backbone.layer3.2.conv1.weight | N | 256X1024X1X1 | Min:-0.387 Max:0.337 | 0.01 | 0.0001 |
| teacher.backbone.layer3.2.bn1.weight | N | 256 | Min:0.463 Max:1.886 | 0.01 | 0.0001 |
| teacher.backbone.layer3.2.bn1.bias | N | 256 | Min:-2.399 Max:0.488 | 0.01 | 0.0001 |
| teacher.backbone.layer3.2.conv2.weight | N | 256X256X3X3 | Min:-0.165 Max:0.258 | 0.01 | 0.0001 |
| teacher.backbone.layer3.2.bn2.weight | N | 256 | Min:0.555 Max:1.901 | 0.01 | 0.0001 |
| teacher.backbone.layer3.2.bn2.bias | N | 256 | Min:-1.655 Max:0.704 | 0.01 | 0.0001 |
| teacher.backbone.layer3.2.conv3.weight | N | 1024X256X1X1 | Min:-0.290 Max:0.261 | 0.01 | 0.0001 |
| teacher.backbone.layer3.2.bn3.weight | N | 1024 | Min:0.049 Max:1.450 | 0.01 | 0.0001 |
| teacher.backbone.layer3.2.bn3.bias | N | 1024 | Min:-1.201 Max:0.587 | 0.01 | 0.0001 |
| teacher.backbone.layer3.3.conv1.weight | N | 256X1024X1X1 | Min:-0.194 Max:0.295 | 0.01 | 0.0001 |
| teacher.backbone.layer3.3.bn1.weight | N | 256 | Min:0.442 Max:1.353 | 0.01 | 0.0001 |
| teacher.backbone.layer3.3.bn1.bias | N | 256 | Min:-2.322 Max:0.509 | 0.01 | 0.0001 |
| teacher.backbone.layer3.3.conv2.weight | N | 256X256X3X3 | Min:-0.201 Max:0.176 | 0.01 | 0.0001 |
| teacher.backbone.layer3.3.bn2.weight | N | 256 | Min:0.529 Max:1.939 | 0.01 | 0.0001 |
| teacher.backbone.layer3.3.bn2.bias | N | 256 | Min:-1.610 Max:0.776 | 0.01 | 0.0001 |
| teacher.backbone.layer3.3.conv3.weight | N | 1024X256X1X1 | Min:-0.205 Max:0.239 | 0.01 | 0.0001 |
| teacher.backbone.layer3.3.bn3.weight | N | 1024 | Min:-0.037 Max:1.646 | 0.01 | 0.0001 |
| teacher.backbone.layer3.3.bn3.bias | N | 1024 | Min:-1.484 Max:0.344 | 0.01 | 0.0001 |
| teacher.backbone.layer3.4.conv1.weight | N | 256X1024X1X1 | Min:-0.226 Max:0.306 | 0.01 | 0.0001 |
| teacher.backbone.layer3.4.bn1.weight | N | 256 | Min:0.438 Max:1.446 | 0.01 | 0.0001 |
| teacher.backbone.layer3.4.bn1.bias | N | 256 | Min:-2.511 Max:0.557 | 0.01 | 0.0001 |
| teacher.backbone.layer3.4.conv2.weight | N | 256X256X3X3 | Min:-0.147 Max:0.223 | 0.01 | 0.0001 |
| teacher.backbone.layer3.4.bn2.weight | N | 256 | Min:0.651 Max:1.858 | 0.01 | 0.0001 |
| teacher.backbone.layer3.4.bn2.bias | N | 256 | Min:-1.588 Max:0.661 | 0.01 | 0.0001 |
| teacher.backbone.layer3.4.conv3.weight | N | 1024X256X1X1 | Min:-0.178 Max:0.265 | 0.01 | 0.0001 |
| teacher.backbone.layer3.4.bn3.weight | N | 1024 | Min:-0.001 Max:1.501 | 0.01 | 0.0001 |
| teacher.backbone.layer3.4.bn3.bias | N | 1024 | Min:-1.108 Max:0.639 | 0.01 | 0.0001 |
| teacher.backbone.layer3.5.conv1.weight | N | 256X1024X1X1 | Min:-0.153 Max:0.330 | 0.01 | 0.0001 |
| teacher.backbone.layer3.5.bn1.weight | N | 256 | Min:0.425 Max:1.547 | 0.01 | 0.0001 |
| teacher.backbone.layer3.5.bn1.bias | N | 256 | Min:-1.972 Max:0.823 | 0.01 | 0.0001 |
| teacher.backbone.layer3.5.conv2.weight | N | 256X256X3X3 | Min:-0.293 Max:0.276 | 0.01 | 0.0001 |
| teacher.backbone.layer3.5.bn2.weight | N | 256 | Min:0.650 Max:2.942 | 0.01 | 0.0001 |
| teacher.backbone.layer3.5.bn2.bias | N | 256 | Min:-1.093 Max:0.771 | 0.01 | 0.0001 |
| teacher.backbone.layer3.5.conv3.weight | N | 1024X256X1X1 | Min:-0.232 Max:0.294 | 0.01 | 0.0001 |
| teacher.backbone.layer3.5.bn3.weight | N | 1024 | Min:0.004 Max:1.984 | 0.01 | 0.0001 |
| teacher.backbone.layer3.5.bn3.bias | N | 1024 | Min:-1.636 Max:1.250 | 0.01 | 0.0001 |
| teacher.backbone.layer4.0.conv1.weight | N | 512X1024X1X1 | Min:-0.184 Max:0.331 | 0.01 | 0.0001 |
| teacher.backbone.layer4.0.bn1.weight | N | 512 | Min:0.535 Max:1.594 | 0.01 | 0.0001 |
| teacher.backbone.layer4.0.bn1.bias | N | 512 | Min:-1.756 Max:0.288 | 0.01 | 0.0001 |
| teacher.backbone.layer4.0.conv2.weight | N | 512X512X3X3 | Min:-0.175 Max:0.272 | 0.01 | 0.0001 |
| teacher.backbone.layer4.0.bn2.weight | N | 512 | Min:0.456 Max:1.542 | 0.01 | 0.0001 |
| teacher.backbone.layer4.0.bn2.bias | N | 512 | Min:-1.820 Max:0.839 | 0.01 | 0.0001 |
| teacher.backbone.layer4.0.conv3.weight | N | 2048X512X1X1 | Min:-0.332 Max:0.432 | 0.01 | 0.0001 |
| teacher.backbone.layer4.0.bn3.weight | N | 2048 | Min:0.888 Max:3.492 | 0.01 | 0.0001 |
| teacher.backbone.layer4.0.bn3.bias | N | 2048 | Min:-1.810 Max:0.980 | 0.01 | 0.0001 |
| teacher.backbone.layer4.0.downsample.0.weight | N | 2048X1024X1X1 | Min:-0.622 Max:0.465 | 0.01 | 0.0001 |
| teacher.backbone.layer4.0.downsample.1.weight | N | 2048 | Min:0.261 Max:4.575 | 0.01 | 0.0001 |
| teacher.backbone.layer4.0.downsample.1.bias | N | 2048 | Min:-1.810 Max:0.980 | 0.01 | 0.0001 |
| teacher.backbone.layer4.1.conv1.weight | N | 512X2048X1X1 | Min:-0.316 Max:0.577 | 0.01 | 0.0001 |
| teacher.backbone.layer4.1.bn1.weight | N | 512 | Min:0.398 Max:1.429 | 0.01 | 0.0001 |
| teacher.backbone.layer4.1.bn1.bias | N | 512 | Min:-1.380 Max:0.428 | 0.01 | 0.0001 |
| teacher.backbone.layer4.1.conv2.weight | N | 512X512X3X3 | Min:-0.217 Max:0.284 | 0.01 | 0.0001 |
| teacher.backbone.layer4.1.bn2.weight | N | 512 | Min:0.349 Max:1.550 | 0.01 | 0.0001 |
| teacher.backbone.layer4.1.bn2.bias | N | 512 | Min:-1.867 Max:0.880 | 0.01 | 0.0001 |
| teacher.backbone.layer4.1.conv3.weight | N | 2048X512X1X1 | Min:-0.200 Max:0.277 | 0.01 | 0.0001 |
| teacher.backbone.layer4.1.bn3.weight | N | 2048 | Min:0.574 Max:2.847 | 0.01 | 0.0001 |
| teacher.backbone.layer4.1.bn3.bias | N | 2048 | Min:-2.638 Max:0.544 | 0.01 | 0.0001 |
| teacher.backbone.layer4.2.conv1.weight | N | 512X2048X1X1 | Min:-0.289 Max:0.514 | 0.01 | 0.0001 |
| teacher.backbone.layer4.2.bn1.weight | N | 512 | Min:0.366 Max:1.249 | 0.01 | 0.0001 |
| teacher.backbone.layer4.2.bn1.bias | N | 512 | Min:-1.664 Max:0.753 | 0.01 | 0.0001 |
| teacher.backbone.layer4.2.conv2.weight | N | 512X512X3X3 | Min:-0.142 Max:0.144 | 0.01 | 0.0001 |
| teacher.backbone.layer4.2.bn2.weight | N | 512 | Min:0.516 Max:1.335 | 0.01 | 0.0001 |
| teacher.backbone.layer4.2.bn2.bias | N | 512 | Min:-1.871 Max:1.181 | 0.01 | 0.0001 |
| teacher.backbone.layer4.2.conv3.weight | N | 2048X512X1X1 | Min:-0.135 Max:0.300 | 0.01 | 0.0001 |
| teacher.backbone.layer4.2.bn3.weight | N | 2048 | Min:0.435 Max:3.073 | 0.01 | 0.0001 |
| teacher.backbone.layer4.2.bn3.bias | N | 2048 | Min:-3.885 Max:-0.249 | 0.01 | 0.0001 |
| teacher.neck.lateral_convs.0.conv.weight | N | 256X256X1X1 | Min:-0.108 Max:0.108 | 0.01 | 0.0001 |
| teacher.neck.lateral_convs.0.conv.bias | N | 256 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
| teacher.neck.lateral_convs.1.conv.weight | N | 256X512X1X1 | Min:-0.088 Max:0.088 | 0.01 | 0.0001 |
| teacher.neck.lateral_convs.1.conv.bias | N | 256 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
| teacher.neck.lateral_convs.2.conv.weight | N | 256X1024X1X1 | Min:-0.068 Max:0.068 | 0.01 | 0.0001 |
| teacher.neck.lateral_convs.2.conv.bias | N | 256 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
| teacher.neck.lateral_convs.3.conv.weight | N | 256X2048X1X1 | Min:-0.051 Max:0.051 | 0.01 | 0.0001 |
| teacher.neck.lateral_convs.3.conv.bias | N | 256 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
| teacher.neck.fpn_convs.0.conv.weight | N | 256X256X3X3 | Min:-0.036 Max:0.036 | 0.01 | 0.0001 |
| teacher.neck.fpn_convs.0.conv.bias | N | 256 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
| teacher.neck.fpn_convs.1.conv.weight | N | 256X256X3X3 | Min:-0.036 Max:0.036 | 0.01 | 0.0001 |
| teacher.neck.fpn_convs.1.conv.bias | N | 256 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
| teacher.neck.fpn_convs.2.conv.weight | N | 256X256X3X3 | Min:-0.036 Max:0.036 | 0.01 | 0.0001 |
| teacher.neck.fpn_convs.2.conv.bias | N | 256 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
| teacher.neck.fpn_convs.3.conv.weight | N | 256X256X3X3 | Min:-0.036 Max:0.036 | 0.01 | 0.0001 |
| teacher.neck.fpn_convs.3.conv.bias | N | 256 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
| teacher.rpn_head.rpn_conv.weight | N | 256X256X3X3 | Min:-0.046 Max:0.050 | 0.01 | 0.0001 |
| teacher.rpn_head.rpn_conv.bias | N | 256 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
| teacher.rpn_head.rpn_cls.weight | N | 3X256X1X1 | Min:-0.034 Max:0.028 | 0.01 | 0.0001 |
| teacher.rpn_head.rpn_cls.bias | N | 3 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
| teacher.rpn_head.rpn_reg.weight | N | 12X256X1X1 | Min:-0.034 Max:0.039 | 0.01 | 0.0001 |
| teacher.rpn_head.rpn_reg.bias | N | 12 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
| teacher.roi_head.bbox_head.fc_cls.weight | N | 81X1024 | Min:-0.177 Max:0.196 | 0.01 | 0.0001 |
| teacher.roi_head.bbox_head.fc_cls.bias | N | 81 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
| teacher.roi_head.bbox_head.fc_reg.weight | N | 320X1024 | Min:-0.185 Max:0.207 | 0.01 | 0.0001 |
| teacher.roi_head.bbox_head.fc_reg.bias | N | 320 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
| teacher.roi_head.bbox_head.shared_fcs.0.weight | N | 1024X12544 | Min:-0.068 Max:0.062 | 0.01 | 0.0001 |
| teacher.roi_head.bbox_head.shared_fcs.0.bias | N | 1024 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
| teacher.roi_head.bbox_head.shared_fcs.1.weight | N | 1024X1024 | Min:-0.144 Max:0.147 | 0.01 | 0.0001 |
| teacher.roi_head.bbox_head.shared_fcs.1.bias | N | 1024 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
| student.backbone.conv1.weight | N | 64X3X7X7 | Min:-0.671 Max:0.704 | 0.01 | 0.0001 |
| student.backbone.bn1.weight | N | 64 | Min:0.513 Max:2.669 | 0.01 | 0.0001 |
| student.backbone.bn1.bias | N | 64 | Min:-2.654 Max:6.354 | 0.01 | 0.0001 |
| student.backbone.layer1.0.conv1.weight | N | 64X64X1X1 | Min:-0.717 Max:0.392 | 0.01 | 0.0001 |
| student.backbone.layer1.0.bn1.weight | N | 64 | Min:0.509 Max:2.066 | 0.01 | 0.0001 |
| student.backbone.layer1.0.bn1.bias | N | 64 | Min:-2.411 Max:3.608 | 0.01 | 0.0001 |
| student.backbone.layer1.0.conv2.weight | N | 64X64X3X3 | Min:-0.390 Max:0.364 | 0.01 | 0.0001 |
| student.backbone.layer1.0.bn2.weight | N | 64 | Min:0.420 Max:2.530 | 0.01 | 0.0001 |
| student.backbone.layer1.0.bn2.bias | N | 64 | Min:-2.286 Max:5.913 | 0.01 | 0.0001 |
| student.backbone.layer1.0.conv3.weight | N | 256X64X1X1 | Min:-0.397 Max:0.348 | 0.01 | 0.0001 |
| student.backbone.layer1.0.bn3.weight | N | 256 | Min:0.011 Max:2.820 | 0.01 | 0.0001 |
| student.backbone.layer1.0.bn3.bias | N | 256 | Min:-1.126 Max:1.522 | 0.01 | 0.0001 |
| student.backbone.layer1.0.downsample.0.weight | N | 256X64X1X1 | Min:-0.772 Max:0.900 | 0.01 | 0.0001 |
| student.backbone.layer1.0.downsample.1.weight | N | 256 | Min:0.004 Max:3.064 | 0.01 | 0.0001 |
| student.backbone.layer1.0.downsample.1.bias | N | 256 | Min:-1.126 Max:1.522 | 0.01 | 0.0001 |
| student.backbone.layer1.1.conv1.weight | N | 64X256X1X1 | Min:-0.297 Max:0.220 | 0.01 | 0.0001 |
| student.backbone.layer1.1.bn1.weight | N | 64 | Min:0.746 Max:1.949 | 0.01 | 0.0001 |
| student.backbone.layer1.1.bn1.bias | N | 64 | Min:-1.688 Max:1.578 | 0.01 | 0.0001 |
| student.backbone.layer1.1.conv2.weight | N | 64X64X3X3 | Min:-0.240 Max:0.318 | 0.01 | 0.0001 |
| student.backbone.layer1.1.bn2.weight | N | 64 | Min:0.621 Max:1.618 | 0.01 | 0.0001 |
| student.backbone.layer1.1.bn2.bias | N | 64 | Min:-2.003 Max:2.398 | 0.01 | 0.0001 |
| student.backbone.layer1.1.conv3.weight | N | 256X64X1X1 | Min:-0.240 Max:0.280 | 0.01 | 0.0001 |
| student.backbone.layer1.1.bn3.weight | N | 256 | Min:-0.017 Max:2.130 | 0.01 | 0.0001 |
| student.backbone.layer1.1.bn3.bias | N | 256 | Min:-1.711 Max:1.291 | 0.01 | 0.0001 |
| student.backbone.layer1.2.conv1.weight | N | 64X256X1X1 | Min:-0.210 Max:0.264 | 0.01 | 0.0001 |
| student.backbone.layer1.2.bn1.weight | N | 64 | Min:0.574 Max:1.688 | 0.01 | 0.0001 |
| student.backbone.layer1.2.bn1.bias | N | 64 | Min:-1.876 Max:1.090 | 0.01 | 0.0001 |
| student.backbone.layer1.2.conv2.weight | N | 64X64X3X3 | Min:-0.218 Max:0.201 | 0.01 | 0.0001 |
| student.backbone.layer1.2.bn2.weight | N | 64 | Min:0.757 Max:1.649 | 0.01 | 0.0001 |
| student.backbone.layer1.2.bn2.bias | N | 64 | Min:-2.221 Max:1.878 | 0.01 | 0.0001 |
| student.backbone.layer1.2.conv3.weight | N | 256X64X1X1 | Min:-0.275 Max:0.350 | 0.01 | 0.0001 |
| student.backbone.layer1.2.bn3.weight | N | 256 | Min:-0.058 Max:2.154 | 0.01 | 0.0001 |
| student.backbone.layer1.2.bn3.bias | N | 256 | Min:-1.570 Max:1.535 | 0.01 | 0.0001 |
| student.backbone.layer2.0.conv1.weight | Y | 128X256X1X1 | Min:-0.334 Max:0.300 | 0.01 | 0.0001 |
| student.backbone.layer2.0.bn1.weight | N | 128 | Min:0.610 Max:1.642 | 0.01 | 0.0001 |
| student.backbone.layer2.0.bn1.bias | N | 128 | Min:-1.579 Max:1.449 | 0.01 | 0.0001 |
| student.backbone.layer2.0.conv2.weight | Y | 128X128X3X3 | Min:-0.384 Max:0.377 | 0.01 | 0.0001 |
| student.backbone.layer2.0.bn2.weight | N | 128 | Min:0.605 Max:1.622 | 0.01 | 0.0001 |
| student.backbone.layer2.0.bn2.bias | N | 128 | Min:-2.768 Max:1.747 | 0.01 | 0.0001 |
| student.backbone.layer2.0.conv3.weight | Y | 512X128X1X1 | Min:-0.374 Max:0.434 | 0.01 | 0.0001 |
| student.backbone.layer2.0.bn3.weight | N | 512 | Min:-0.007 Max:2.730 | 0.01 | 0.0001 |
| student.backbone.layer2.0.bn3.bias | N | 512 | Min:-1.545 Max:1.256 | 0.01 | 0.0001 |
| student.backbone.layer2.0.downsample.0.weight | Y | 512X256X1X1 | Min:-0.466 Max:0.642 | 0.01 | 0.0001 |
| student.backbone.layer2.0.downsample.1.weight | N | 512 | Min:0.006 Max:2.552 | 0.01 | 0.0001 |
| student.backbone.layer2.0.downsample.1.bias | N | 512 | Min:-1.545 Max:1.256 | 0.01 | 0.0001 |
| student.backbone.layer2.1.conv1.weight | Y | 128X512X1X1 | Min:-0.162 Max:0.195 | 0.01 | 0.0001 |
| student.backbone.layer2.1.bn1.weight | N | 128 | Min:0.578 Max:1.429 | 0.01 | 0.0001 |
| student.backbone.layer2.1.bn1.bias | N | 128 | Min:-4.348 Max:0.588 | 0.01 | 0.0001 |
| student.backbone.layer2.1.conv2.weight | Y | 128X128X3X3 | Min:-0.176 Max:0.177 | 0.01 | 0.0001 |
| student.backbone.layer2.1.bn2.weight | N | 128 | Min:0.511 Max:1.794 | 0.01 | 0.0001 |
| student.backbone.layer2.1.bn2.bias | N | 128 | Min:-3.825 Max:1.343 | 0.01 | 0.0001 |
| student.backbone.layer2.1.conv3.weight | Y | 512X128X1X1 | Min:-0.344 Max:0.336 | 0.01 | 0.0001 |
| student.backbone.layer2.1.bn3.weight | N | 512 | Min:-0.072 Max:2.122 | 0.01 | 0.0001 |
| student.backbone.layer2.1.bn3.bias | N | 512 | Min:-1.502 Max:1.166 | 0.01 | 0.0001 |
| student.backbone.layer2.2.conv1.weight | Y | 128X512X1X1 | Min:-0.330 Max:0.369 | 0.01 | 0.0001 |
| student.backbone.layer2.2.bn1.weight | N | 128 | Min:0.406 Max:1.696 | 0.01 | 0.0001 |
| student.backbone.layer2.2.bn1.bias | N | 128 | Min:-2.696 Max:1.944 | 0.01 | 0.0001 |
| student.backbone.layer2.2.conv2.weight | Y | 128X128X3X3 | Min:-0.326 Max:0.374 | 0.01 | 0.0001 |
| student.backbone.layer2.2.bn2.weight | N | 128 | Min:0.460 Max:2.179 | 0.01 | 0.0001 |
| student.backbone.layer2.2.bn2.bias | N | 128 | Min:-1.587 Max:0.589 | 0.01 | 0.0001 |
| student.backbone.layer2.2.conv3.weight | Y | 512X128X1X1 | Min:-0.288 Max:0.232 | 0.01 | 0.0001 |
| student.backbone.layer2.2.bn3.weight | N | 512 | Min:-0.006 Max:3.043 | 0.01 | 0.0001 |
| student.backbone.layer2.2.bn3.bias | N | 512 | Min:-2.369 Max:0.440 | 0.01 | 0.0001 |
| student.backbone.layer2.3.conv1.weight | Y | 128X512X1X1 | Min:-0.298 Max:0.346 | 0.01 | 0.0001 |
| student.backbone.layer2.3.bn1.weight | N | 128 | Min:0.736 Max:2.394 | 0.01 | 0.0001 |
| student.backbone.layer2.3.bn1.bias | N | 128 | Min:-2.643 Max:0.756 | 0.01 | 0.0001 |
| student.backbone.layer2.3.conv2.weight | Y | 128X128X3X3 | Min:-0.272 Max:0.208 | 0.01 | 0.0001 |
| student.backbone.layer2.3.bn2.weight | N | 128 | Min:0.682 Max:1.694 | 0.01 | 0.0001 |
| student.backbone.layer2.3.bn2.bias | N | 128 | Min:-1.365 Max:1.599 | 0.01 | 0.0001 |
| student.backbone.layer2.3.conv3.weight | Y | 512X128X1X1 | Min:-0.279 Max:0.281 | 0.01 | 0.0001 |
| student.backbone.layer2.3.bn3.weight | N | 512 | Min:-0.009 Max:1.721 | 0.01 | 0.0001 |
| student.backbone.layer2.3.bn3.bias | N | 512 | Min:-1.897 Max:1.182 | 0.01 | 0.0001 |
| student.backbone.layer3.0.conv1.weight | Y | 256X512X1X1 | Min:-0.230 Max:0.341 | 0.01 | 0.0001 |
| student.backbone.layer3.0.bn1.weight | N | 256 | Min:0.621 Max:1.636 | 0.01 | 0.0001 |
| student.backbone.layer3.0.bn1.bias | N | 256 | Min:-1.420 Max:0.917 | 0.01 | 0.0001 |
| student.backbone.layer3.0.conv2.weight | Y | 256X256X3X3 | Min:-0.267 Max:0.179 | 0.01 | 0.0001 |
| student.backbone.layer3.0.bn2.weight | N | 256 | Min:0.585 Max:1.749 | 0.01 | 0.0001 |
| student.backbone.layer3.0.bn2.bias | N | 256 | Min:-1.837 Max:1.398 | 0.01 | 0.0001 |
| student.backbone.layer3.0.conv3.weight | Y | 1024X256X1X1 | Min:-0.333 Max:0.384 | 0.01 | 0.0001 |
| student.backbone.layer3.0.bn3.weight | N | 1024 | Min:0.071 Max:2.367 | 0.01 | 0.0001 |
| student.backbone.layer3.0.bn3.bias | N | 1024 | Min:-0.938 Max:0.887 | 0.01 | 0.0001 |
| student.backbone.layer3.0.downsample.0.weight | Y | 1024X512X1X1 | Min:-0.333 Max:0.421 | 0.01 | 0.0001 |
| student.backbone.layer3.0.downsample.1.weight | N | 1024 | Min:0.034 Max:2.779 | 0.01 | 0.0001 |
| student.backbone.layer3.0.downsample.1.bias | N | 1024 | Min:-0.938 Max:0.887 | 0.01 | 0.0001 |
| student.backbone.layer3.1.conv1.weight | Y | 256X1024X1X1 | Min:-0.197 Max:0.236 | 0.01 | 0.0001 |
| student.backbone.layer3.1.bn1.weight | N | 256 | Min:0.566 Max:1.743 | 0.01 | 0.0001 |
| student.backbone.layer3.1.bn1.bias | N | 256 | Min:-2.703 Max:1.042 | 0.01 | 0.0001 |
| student.backbone.layer3.1.conv2.weight | Y | 256X256X3X3 | Min:-0.436 Max:0.196 | 0.01 | 0.0001 |
| student.backbone.layer3.1.bn2.weight | N | 256 | Min:0.515 Max:2.301 | 0.01 | 0.0001 |
| student.backbone.layer3.1.bn2.bias | N | 256 | Min:-2.548 Max:1.856 | 0.01 | 0.0001 |
| student.backbone.layer3.1.conv3.weight | Y | 1024X256X1X1 | Min:-0.438 Max:0.295 | 0.01 | 0.0001 |
| student.backbone.layer3.1.bn3.weight | N | 1024 | Min:0.055 Max:1.943 | 0.01 | 0.0001 |
| student.backbone.layer3.1.bn3.bias | N | 1024 | Min:-1.647 Max:1.016 | 0.01 | 0.0001 |
| student.backbone.layer3.2.conv1.weight | Y | 256X1024X1X1 | Min:-0.387 Max:0.337 | 0.01 | 0.0001 |
| student.backbone.layer3.2.bn1.weight | N | 256 | Min:0.463 Max:1.886 | 0.01 | 0.0001 |
| student.backbone.layer3.2.bn1.bias | N | 256 | Min:-2.399 Max:0.488 | 0.01 | 0.0001 |
| student.backbone.layer3.2.conv2.weight | Y | 256X256X3X3 | Min:-0.165 Max:0.258 | 0.01 | 0.0001 |
| student.backbone.layer3.2.bn2.weight | N | 256 | Min:0.555 Max:1.901 | 0.01 | 0.0001 |
| student.backbone.layer3.2.bn2.bias | N | 256 | Min:-1.655 Max:0.704 | 0.01 | 0.0001 |
| student.backbone.layer3.2.conv3.weight | Y | 1024X256X1X1 | Min:-0.290 Max:0.261 | 0.01 | 0.0001 |
| student.backbone.layer3.2.bn3.weight | N | 1024 | Min:0.049 Max:1.450 | 0.01 | 0.0001 |
| student.backbone.layer3.2.bn3.bias | N | 1024 | Min:-1.201 Max:0.587 | 0.01 | 0.0001 |
| student.backbone.layer3.3.conv1.weight | Y | 256X1024X1X1 | Min:-0.194 Max:0.295 | 0.01 | 0.0001 |
| student.backbone.layer3.3.bn1.weight | N | 256 | Min:0.442 Max:1.353 | 0.01 | 0.0001 |
| student.backbone.layer3.3.bn1.bias | N | 256 | Min:-2.322 Max:0.509 | 0.01 | 0.0001 |
| student.backbone.layer3.3.conv2.weight | Y | 256X256X3X3 | Min:-0.201 Max:0.176 | 0.01 | 0.0001 |
| student.backbone.layer3.3.bn2.weight | N | 256 | Min:0.529 Max:1.939 | 0.01 | 0.0001 |
| student.backbone.layer3.3.bn2.bias | N | 256 | Min:-1.610 Max:0.776 | 0.01 | 0.0001 |
| student.backbone.layer3.3.conv3.weight | Y | 1024X256X1X1 | Min:-0.205 Max:0.239 | 0.01 | 0.0001 |
| student.backbone.layer3.3.bn3.weight | N | 1024 | Min:-0.037 Max:1.646 | 0.01 | 0.0001 |
| student.backbone.layer3.3.bn3.bias | N | 1024 | Min:-1.484 Max:0.344 | 0.01 | 0.0001 |
| student.backbone.layer3.4.conv1.weight | Y | 256X1024X1X1 | Min:-0.226 Max:0.306 | 0.01 | 0.0001 |
| student.backbone.layer3.4.bn1.weight | N | 256 | Min:0.438 Max:1.446 | 0.01 | 0.0001 |
| student.backbone.layer3.4.bn1.bias | N | 256 | Min:-2.511 Max:0.557 | 0.01 | 0.0001 |
| student.backbone.layer3.4.conv2.weight | Y | 256X256X3X3 | Min:-0.147 Max:0.223 | 0.01 | 0.0001 |
| student.backbone.layer3.4.bn2.weight | N | 256 | Min:0.651 Max:1.858 | 0.01 | 0.0001 |
| student.backbone.layer3.4.bn2.bias | N | 256 | Min:-1.588 Max:0.661 | 0.01 | 0.0001 |
| student.backbone.layer3.4.conv3.weight | Y | 1024X256X1X1 | Min:-0.178 Max:0.265 | 0.01 | 0.0001 |
| student.backbone.layer3.4.bn3.weight | N | 1024 | Min:-0.001 Max:1.501 | 0.01 | 0.0001 |
| student.backbone.layer3.4.bn3.bias | N | 1024 | Min:-1.108 Max:0.639 | 0.01 | 0.0001 |
| student.backbone.layer3.5.conv1.weight | Y | 256X1024X1X1 | Min:-0.153 Max:0.330 | 0.01 | 0.0001 |
| student.backbone.layer3.5.bn1.weight | N | 256 | Min:0.425 Max:1.547 | 0.01 | 0.0001 |
| student.backbone.layer3.5.bn1.bias | N | 256 | Min:-1.972 Max:0.823 | 0.01 | 0.0001 |
| student.backbone.layer3.5.conv2.weight | Y | 256X256X3X3 | Min:-0.293 Max:0.276 | 0.01 | 0.0001 |
| student.backbone.layer3.5.bn2.weight | N | 256 | Min:0.650 Max:2.942 | 0.01 | 0.0001 |
| student.backbone.layer3.5.bn2.bias | N | 256 | Min:-1.093 Max:0.771 | 0.01 | 0.0001 |
| student.backbone.layer3.5.conv3.weight | Y | 1024X256X1X1 | Min:-0.232 Max:0.294 | 0.01 | 0.0001 |
| student.backbone.layer3.5.bn3.weight | N | 1024 | Min:0.004 Max:1.984 | 0.01 | 0.0001 |
| student.backbone.layer3.5.bn3.bias | N | 1024 | Min:-1.636 Max:1.250 | 0.01 | 0.0001 |
| student.backbone.layer4.0.conv1.weight | Y | 512X1024X1X1 | Min:-0.184 Max:0.331 | 0.01 | 0.0001 |
| student.backbone.layer4.0.bn1.weight | N | 512 | Min:0.535 Max:1.594 | 0.01 | 0.0001 |
| student.backbone.layer4.0.bn1.bias | N | 512 | Min:-1.756 Max:0.288 | 0.01 | 0.0001 |
| student.backbone.layer4.0.conv2.weight | Y | 512X512X3X3 | Min:-0.175 Max:0.272 | 0.01 | 0.0001 |
| student.backbone.layer4.0.bn2.weight | N | 512 | Min:0.456 Max:1.542 | 0.01 | 0.0001 |
| student.backbone.layer4.0.bn2.bias | N | 512 | Min:-1.820 Max:0.839 | 0.01 | 0.0001 |
| student.backbone.layer4.0.conv3.weight | Y | 2048X512X1X1 | Min:-0.332 Max:0.432 | 0.01 | 0.0001 |
| student.backbone.layer4.0.bn3.weight | N | 2048 | Min:0.888 Max:3.492 | 0.01 | 0.0001 |
| student.backbone.layer4.0.bn3.bias | N | 2048 | Min:-1.810 Max:0.980 | 0.01 | 0.0001 |
| student.backbone.layer4.0.downsample.0.weight | Y | 2048X1024X1X1 | Min:-0.622 Max:0.465 | 0.01 | 0.0001 |
| student.backbone.layer4.0.downsample.1.weight | N | 2048 | Min:0.261 Max:4.575 | 0.01 | 0.0001 |
| student.backbone.layer4.0.downsample.1.bias | N | 2048 | Min:-1.810 Max:0.980 | 0.01 | 0.0001 |
| student.backbone.layer4.1.conv1.weight | Y | 512X2048X1X1 | Min:-0.316 Max:0.577 | 0.01 | 0.0001 |
| student.backbone.layer4.1.bn1.weight | N | 512 | Min:0.398 Max:1.429 | 0.01 | 0.0001 |
| student.backbone.layer4.1.bn1.bias | N | 512 | Min:-1.380 Max:0.428 | 0.01 | 0.0001 |
| student.backbone.layer4.1.conv2.weight | Y | 512X512X3X3 | Min:-0.217 Max:0.284 | 0.01 | 0.0001 |
| student.backbone.layer4.1.bn2.weight | N | 512 | Min:0.349 Max:1.550 | 0.01 | 0.0001 |
| student.backbone.layer4.1.bn2.bias | N | 512 | Min:-1.867 Max:0.880 | 0.01 | 0.0001 |
| student.backbone.layer4.1.conv3.weight | Y | 2048X512X1X1 | Min:-0.200 Max:0.277 | 0.01 | 0.0001 |
| student.backbone.layer4.1.bn3.weight | N | 2048 | Min:0.574 Max:2.847 | 0.01 | 0.0001 |
| student.backbone.layer4.1.bn3.bias | N | 2048 | Min:-2.638 Max:0.544 | 0.01 | 0.0001 |
| student.backbone.layer4.2.conv1.weight | Y | 512X2048X1X1 | Min:-0.289 Max:0.514 | 0.01 | 0.0001 |
| student.backbone.layer4.2.bn1.weight | N | 512 | Min:0.366 Max:1.249 | 0.01 | 0.0001 |
| student.backbone.layer4.2.bn1.bias | N | 512 | Min:-1.664 Max:0.753 | 0.01 | 0.0001 |
| student.backbone.layer4.2.conv2.weight | Y | 512X512X3X3 | Min:-0.142 Max:0.144 | 0.01 | 0.0001 |
| student.backbone.layer4.2.bn2.weight | N | 512 | Min:0.516 Max:1.335 | 0.01 | 0.0001 |
| student.backbone.layer4.2.bn2.bias | N | 512 | Min:-1.871 Max:1.181 | 0.01 | 0.0001 |
| student.backbone.layer4.2.conv3.weight | Y | 2048X512X1X1 | Min:-0.135 Max:0.300 | 0.01 | 0.0001 |
| student.backbone.layer4.2.bn3.weight | N | 2048 | Min:0.435 Max:3.073 | 0.01 | 0.0001 |
| student.backbone.layer4.2.bn3.bias | N | 2048 | Min:-3.885 Max:-0.249 | 0.01 | 0.0001 |
| student.neck.lateral_convs.0.conv.weight | Y | 256X256X1X1 | Min:-0.108 Max:0.108 | 0.01 | 0.0001 |
| student.neck.lateral_convs.0.conv.bias | Y | 256 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
| student.neck.lateral_convs.1.conv.weight | Y | 256X512X1X1 | Min:-0.088 Max:0.088 | 0.01 | 0.0001 |
| student.neck.lateral_convs.1.conv.bias | Y | 256 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
| student.neck.lateral_convs.2.conv.weight | Y | 256X1024X1X1 | Min:-0.068 Max:0.068 | 0.01 | 0.0001 |
| student.neck.lateral_convs.2.conv.bias | Y | 256 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
| student.neck.lateral_convs.3.conv.weight | Y | 256X2048X1X1 | Min:-0.051 Max:0.051 | 0.01 | 0.0001 |
| student.neck.lateral_convs.3.conv.bias | Y | 256 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
| student.neck.fpn_convs.0.conv.weight | Y | 256X256X3X3 | Min:-0.036 Max:0.036 | 0.01 | 0.0001 |
| student.neck.fpn_convs.0.conv.bias | Y | 256 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
| student.neck.fpn_convs.1.conv.weight | Y | 256X256X3X3 | Min:-0.036 Max:0.036 | 0.01 | 0.0001 |
| student.neck.fpn_convs.1.conv.bias | Y | 256 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
| student.neck.fpn_convs.2.conv.weight | Y | 256X256X3X3 | Min:-0.036 Max:0.036 | 0.01 | 0.0001 |
| student.neck.fpn_convs.2.conv.bias | Y | 256 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
| student.neck.fpn_convs.3.conv.weight | Y | 256X256X3X3 | Min:-0.036 Max:0.036 | 0.01 | 0.0001 |
| student.neck.fpn_convs.3.conv.bias | Y | 256 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
| student.rpn_head.rpn_conv.weight | Y | 256X256X3X3 | Min:-0.051 Max:0.052 | 0.01 | 0.0001 |
| student.rpn_head.rpn_conv.bias | Y | 256 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
| student.rpn_head.rpn_cls.weight | Y | 3X256X1X1 | Min:-0.030 Max:0.033 | 0.01 | 0.0001 |
| student.rpn_head.rpn_cls.bias | Y | 3 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
| student.rpn_head.rpn_reg.weight | Y | 12X256X1X1 | Min:-0.039 Max:0.035 | 0.01 | 0.0001 |
| student.rpn_head.rpn_reg.bias | Y | 12 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
| student.roi_head.bbox_head.fc_cls.weight | Y | 81X1024 | Min:-0.168 Max:0.177 | 0.01 | 0.0001 |
| student.roi_head.bbox_head.fc_cls.bias | Y | 81 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
| student.roi_head.bbox_head.fc_reg.weight | Y | 320X1024 | Min:-0.171 Max:0.177 | 0.01 | 0.0001 |
| student.roi_head.bbox_head.fc_reg.bias | Y | 320 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
| student.roi_head.bbox_head.shared_fcs.0.weight | Y | 1024X12544 | Min:-0.062 Max:0.062 | 0.01 | 0.0001 |
| student.roi_head.bbox_head.shared_fcs.0.bias | Y | 1024 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
| student.roi_head.bbox_head.shared_fcs.1.weight | Y | 1024X1024 | Min:-0.145 Max:0.147 | 0.01 | 0.0001 |
| student.roi_head.bbox_head.shared_fcs.1.bias | Y | 1024 | Min:0.000 Max:0.000 | 0.01 | 0.0001 |
+------------------------------------------------+-----------+---------------+-----------------------+------+--------+

len(self) 29320
len(indices) 20488
Traceback (most recent call last):
File "tools/train.py", line 198, in
main()
File "tools/train.py", line 193, in main
meta=meta,
File "/data6/ziqiwen/code/softteacher/ssod/apis/train.py", line 206, in train_detector
runner.run(data_loaders, cfg.workflow)
File "/home/ziqiwen/anaconda3/envs/mm/lib/python3.7/site-packages/mmcv/runner/iter_based_runner.py", line 117, in run
iter_loaders = [IterLoader(x) for x in data_loaders]
File "/home/ziqiwen/anaconda3/envs/mm/lib/python3.7/site-packages/mmcv/runner/iter_based_runner.py", line 117, in
iter_loaders = [IterLoader(x) for x in data_loaders]
File "/home/ziqiwen/anaconda3/envs/mm/lib/python3.7/site-packages/mmcv/runner/iter_based_runner.py", line 23, in init
self.iter_loader = iter(self._dataloader)
File "/home/ziqiwen/anaconda3/envs/mm/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 291, in iter
return _MultiProcessingDataLoaderIter(self)
File "/home/ziqiwen/anaconda3/envs/mm/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 764, in init
self._try_put_index()
File "/home/ziqiwen/anaconda3/envs/mm/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 994, in _try_put_index
index = self._next_index()
File "/home/ziqiwen/anaconda3/envs/mm/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 357, in _next_index
return next(self._sampler_iter) # may raise StopIteration
File "/home/ziqiwen/anaconda3/envs/mm/lib/python3.7/site-packages/torch/utils/data/sampler.py", line 208, in iter
for idx in self.sampler:
File "/data6/ziqiwen/code/softteacher/ssod/datasets/samplers/semi_sampler.py", line 188, in iter
assert len(indices) == len(self)
AssertionError
Traceback (most recent call last):
File "/home/ziqiwen/anaconda3/envs/mm/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/home/ziqiwen/anaconda3/envs/mm/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/ziqiwen/anaconda3/envs/mm/lib/python3.7/site-packages/torch/distributed/launch.py", line 261, in
main()
File "/home/ziqiwen/anaconda3/envs/mm/lib/python3.7/site-packages/torch/distributed/launch.py", line 257, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/home/ziqiwen/anaconda3/envs/mm/bin/python', '-u', 'tools/train.py', '--local_rank=0', 'configs/soft_teacher/soft_teacher_faster_rcnn_r50_caffe_fpn_coco_180k.py', '--launcher', 'pytorch']' returned non-zero exit status 1.

@MendelXu
Collaborator

I am not sure what the problem is. I have tried changing samples_per_gpu to 4 like you, but it works fine. I will take a deeper look later.

@winnerziqi
Author

thanks a lot!!!

@MendelXu
Collaborator

Could you have a look at the JSON file and check whether the correct number of instances is loaded? I have tried building a similar Python environment with a similar config, but it seems OK on my side.

@jackhu-bme

I guess the parameter epoch_length is too small for your dataset, since I encountered the same problem on my medical dataset a few weeks ago and solved it simply by increasing epoch_length. I haven't carefully looked at the sampler code, so this is just a simple and possibly unreasonable guess.
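To make that guess concrete, here is a minimal, hypothetical sketch (not the actual ssod/datasets/samplers/semi_sampler.py, and the numbers are illustrative) of how a fixed-length sampler can promise epoch_length * samples_per_gpu indices but build fewer when the labeled/unlabeled pools are too small for the requested epoch:

import numpy as np

class ToySemiSampler:
    """Illustration only; parameter names mirror the config keys used in this thread."""

    def __init__(self, num_sup, num_unsup, samples_per_gpu=4,
                 sample_ratio=(1, 3), epoch_length=7330, seed=0):
        self.num_sup = num_sup
        self.num_unsup = num_unsup
        self.samples_per_gpu = samples_per_gpu
        self.sample_ratio = sample_ratio
        self.epoch_length = epoch_length
        self.rng = np.random.default_rng(seed)

    def __len__(self):
        # The sampler promises a fixed number of indices per epoch...
        return self.epoch_length * self.samples_per_gpu

    def build_indices(self):
        # ...but if the labeled/unlabeled pools are smaller than what the epoch
        # needs, the concatenated index list falls short of len(self).
        ratio_sum = sum(self.sample_ratio)
        sup_per_batch = self.samples_per_gpu * self.sample_ratio[0] // ratio_sum
        unsup_per_batch = self.samples_per_gpu - sup_per_batch
        sup = self.rng.permutation(self.num_sup)[: sup_per_batch * self.epoch_length]
        unsup = self.rng.permutation(self.num_unsup)[: unsup_per_batch * self.epoch_length]
        return np.concatenate([sup, unsup])

sampler = ToySemiSampler(num_sup=2000, num_unsup=18000)
indices = sampler.build_indices()
print(len(indices), len(sampler))  # 20000 vs 29320: the same kind of mismatch
# assert len(indices) == len(sampler)  # would fail, mirroring the error in this issue

If the real sampler instead repeats or cycles indices, the arithmetic differs, but the failure mode (epoch_length too large for the data actually loaded) looks the same from the assertion's point of view.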

@jessicametzger

jessicametzger commented Nov 1, 2021

I am having this same issue. Regardless of what I set epoch_length to, len(indices) always ends up slightly smaller than len(self)=sum(epoch_length)*samples_per_gpu. Here's my config:

_base_ = [parent_dir+'SoftTeacher/configs/soft_teacher/base.py']

data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
    train=dict(
        sup=dict(
            ann_file=parent_dir+sup_data_path+'/train_data/annotations.json',
            img_prefix=parent_dir+sup_data_path+'/train_data/',
            classes=classes,
        ),
        unsup=dict(
            ann_file=parent_dir+unsup_data_path+'/train_data/annotations.json',
            img_prefix=parent_dir+unsup_data_path+'/train_data/',
            classes=classes,
        ),
    ),
    val=dict(
        ann_file=parent_dir+sup_data_path+'/val_data/annotations.json',
        img_prefix=parent_dir+sup_data_path+'/val_data/',
        classes=classes,
    ),
    test=dict(
        ann_file=parent_dir+sup_data_path+'/val_data/annotations.json',
        img_prefix=parent_dir+sup_data_path+'/val_data/',
        classes=classes,
    ),
    sampler=dict(
        train=dict(
            type="SemiBalanceSampler",
            sample_ratio=[1, 4],
            by_prob=True,
            # at_least_one=True,
            epoch_length=1000,
        )
    ),
)

evaluation = dict(interval=1000, metric='bbox', type='SubModulesDistEvalHook')
optimizer = dict(type='SGD', lr=0.001, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
evaluation = dict(type="SubModulesDistEvalHook", interval=4000)
optimizer = dict(type="SGD", lr=0.01, momentum=0.9, weight_decay=0.0001)
lr_config = dict(step=[3000, 4000])
runner = dict(_delete_=True, type="IterBasedRunner", max_iters=5000)
checkpoint_config = dict(by_epoch=False, interval=1000, max_keep_ckpts=2)

fp16 = dict(loss_scale="dynamic")

log_config = dict(
    interval=49,
    hooks=[
        dict(type="TextLoggerHook", by_epoch=False),
        dict(
            type="WandbLoggerHook",
            init_kwargs=dict(
                project="pre_release",
                name="${cfg_name}",
                config=dict(
                    work_dirs="${work_dir}",
                    total_step="${runner.max_iters}",
                ),
            ),
            by_epoch=False,
        ),
    ],
)
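As a rough sanity check of the relation above, with the values from this config (treating epoch_length as a one-element list is an assumption on my part):

samples_per_gpu = 2
epoch_length = [1000]   # assumed to behave as a one-element list, matching the formula above
expected_len = sum(epoch_length) * samples_per_gpu
print(expected_len)     # 2000, so len(indices) should also reach 2000 per epoch on each GPU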

@watermellon2018

@jessicametzger I have the same problem. Did you solve this?

@jessicametzger

@watermellon2018 I was able to fix it by setting by_prob=False in the sampler config. So the bug is somewhere in here.
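For anyone else hitting this, the workaround amounts to flipping one flag; the snippet below reuses the keys from the config I posted above (values other than by_prob are just those example values):

data = dict(
    sampler=dict(
        train=dict(
            type="SemiBalanceSampler",
            sample_ratio=[1, 4],
            by_prob=False,   # was True; False sidesteps the assertion in my runs
            epoch_length=1000,
        )
    ),
)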

@tahirashehzadi

tahirashehzadi commented Feb 22, 2022

@winnerziqi how did you solve this issue?
assert len(indices) == len(self)
I am getting the same error

@alaa-shubbak

Where exactly did you set self.by_prob = False? In the code it is part of an if branch.
Can you please explain it a bit more?

@alaa-shubbak

I got it. In my config it is already set to False, and I still have the same problem. What shall I do?

@xiangtaowong

I got it. In my config it is already set to False, and I still have the same problem. What shall I do?

How did you solve the problem?

@alaa-shubbak

How did you solve the problem?

My problem was the presence of mask annotations, while my dataset does not have any mask information or annotations.
I removed every part related to masks and tried to train the model again. For example, I removed ("gt_masks") in this:

[screenshot: error mask]
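A hypothetical sketch of the kind of edit I mean, assuming the mask reference lives in the train pipeline (the real line is the one in the screenshot, so adapt as needed):

# Box-only dataset: drop every mask-related entry so "gt_masks" is never requested.
train_pipeline = [
    dict(type="LoadImageFromFile"),
    dict(type="LoadAnnotations", with_bbox=True, with_mask=False),  # was with_mask=True
    dict(type="Collect", keys=["img", "gt_bboxes", "gt_labels"]),   # "gt_masks" removed
]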

I hope this will help you.

@xiangtaowong

Thank you, I'll give it a try.
