Spconv and Cuda error while training on my own dataset #56

Closed

SijanNeupane49 opened this issue Apr 28, 2022 · 19 comments

Comments

@SijanNeupane49

Hi, I got the following error when training on my own dataset. However, I could reproduce the results on the S3DIS dataset without any error. Can you please look into my error and suggest what might be wrong? My dataset is in the same format as S3DIS (xyzrgb).
I have the following configuration:
CUDA = V10.2.89
torch = 1.11.0
spconv-cu102 = 2.1.21
Ubuntu 18.04
I am really sorry for the long error message.
Thank you for your help!

(softgroup2) shrijan_pf@BQ-DX1100-CT2:~/SoftGroup$ ./tools/dist_train.sh configs/softgroup_s3dis_backbone_fold5_mintA.yaml 1
2022-04-28 13:43:26,027 - INFO - Config:
model:
  channels: 32
  num_blocks: 7
  semantic_classes: 3 #changed, original 13
  instance_classes: 3 #changed, original 13
  sem2ins_classes: [0, 1]
  semantic_only: True
  ignore_label: -100
  grouping_cfg:
    score_thr: 0.2
    radius: 0.04
    mean_active: 300
    class_numpoint_mean: [1823, 7457, 6189, 7424, 34229, 1724, 5439, 6016, 39796, 5279, 5092, 12210, 10225]
    npoint_thr: 0.05 # absolute if class_numpoint == -1, relative if class_numpoint != -1
    ignore_classes: [-99]
  instance_voxel_cfg:
    scale: 50
    spatial_shape: 20
  train_cfg:
    max_proposal_num: 200
    pos_iou_thr: 0.5
  test_cfg:
    x4_split: True
    cls_score_thr: 0.001
    mask_score_thr: -0.5
    min_npoint: 100
  fixed_modules: []

data:
  train:
    type: 's3dis'
    data_root: 'dataset/s3dis/preprocess_mint_sample' #changed original 'dataset/s3dis/preprocess'
    prefix: ['Area_2'] # changed original ['Area_1', 'Area_2', 'Area_3', 'Area_4', 'Area_6']
    suffix: '_inst_nostuff.pth'
    repeat: 20
    training: True
    voxel_cfg:
      scale: 50
      spatial_shape: [128, 512]
      max_npoint: 2500000
      min_npoint: 5000
  test:
    type: 's3dis'
    data_root: 'dataset/s3dis/preprocess_mint_sample' #changed original 'dataset/s3dis/preprocess'
    prefix: 'Area_1' #changed, original 'Area_5'
    suffix: '_inst_nostuff.pth'
    training: False
    voxel_cfg:
      scale: 50
      spatial_shape: [128, 512]
      max_npoint: 2500000
      min_npoint: 5000

dataloader:
  train:
    batch_size: 1 #changed; original was 4
    num_workers: 4
  test:
    batch_size: 1
    num_workers: 1

optimizer:
  type: 'Adam'
  lr: 0.004

save_cfg:
  semantic: True
  offset: True
  instance: False

fp16: False
epochs: 20
step_epoch: 0
save_freq: 2
pretrain: './hais_ckpt_spconv2.pth'
work_dir: ''

2022-04-28 13:43:26,027 - INFO - Distributed: True
2022-04-28 13:43:26,027 - INFO - Mix precision training: False
2022-04-28 13:43:27,398 - INFO - Load train dataset: 580 scans
2022-04-28 13:43:27,399 - INFO - Load test dataset: 32 scans
2022-04-28 13:43:27,399 - INFO - Load pretrain from ./hais_ckpt_spconv2.pth
2022-04-28 13:43:27,629 - INFO - removed keys in source state_dict due to size mismatch: semantic_linear.3.weight, semantic_linear.3.bias
2022-04-28 13:43:27,629 - INFO - missing keys in source state_dict: semantic_linear.3.weight, semantic_linear.3.bias
2022-04-28 13:43:27,629 - INFO - unexpected key in source state_dict: tiny_unet.blocks.block0.conv_branch.0.weight, tiny_unet.blocks.block0.conv_branch.0.bias, tiny_unet.blocks.block0.conv_branch.0.running_mean, tiny_unet.blocks.block0.conv_branch.0.running_var, tiny_unet.blocks.block0.conv_branch.0.num_batches_tracked, tiny_unet.blocks.block0.conv_branch.2.weight, tiny_unet.blocks.block0.conv_branch.3.weight, tiny_unet.blocks.block0.conv_branch.3.bias, tiny_unet.blocks.block0.conv_branch.3.running_mean, tiny_unet.blocks.block0.conv_branch.3.running_var, tiny_unet.blocks.block0.conv_branch.3.num_batches_tracked, tiny_unet.blocks.block0.conv_branch.5.weight, tiny_unet.blocks.block1.conv_branch.0.weight, tiny_unet.blocks.block1.conv_branch.0.bias, tiny_unet.blocks.block1.conv_branch.0.running_mean, tiny_unet.blocks.block1.conv_branch.0.running_var, tiny_unet.blocks.block1.conv_branch.0.num_batches_tracked, tiny_unet.blocks.block1.conv_branch.2.weight, tiny_unet.blocks.block1.conv_branch.3.weight, tiny_unet.blocks.block1.conv_branch.3.bias, tiny_unet.blocks.block1.conv_branch.3.running_mean, tiny_unet.blocks.block1.conv_branch.3.running_var, tiny_unet.blocks.block1.conv_branch.3.num_batches_tracked, tiny_unet.blocks.block1.conv_branch.5.weight, tiny_unet.conv.0.weight, tiny_unet.conv.0.bias, tiny_unet.conv.0.running_mean, tiny_unet.conv.0.running_var, tiny_unet.conv.0.num_batches_tracked, tiny_unet.conv.2.weight, tiny_unet.u.blocks.block0.conv_branch.0.weight, tiny_unet.u.blocks.block0.conv_branch.0.bias, tiny_unet.u.blocks.block0.conv_branch.0.running_mean, tiny_unet.u.blocks.block0.conv_branch.0.running_var, tiny_unet.u.blocks.block0.conv_branch.0.num_batches_tracked, tiny_unet.u.blocks.block0.conv_branch.2.weight, tiny_unet.u.blocks.block0.conv_branch.3.weight, tiny_unet.u.blocks.block0.conv_branch.3.bias, tiny_unet.u.blocks.block0.conv_branch.3.running_mean, tiny_unet.u.blocks.block0.conv_branch.3.running_var, tiny_unet.u.blocks.block0.conv_branch.3.num_batches_tracked, tiny_unet.u.blocks.block0.conv_branch.5.weight, tiny_unet.u.blocks.block1.conv_branch.0.weight, tiny_unet.u.blocks.block1.conv_branch.0.bias, tiny_unet.u.blocks.block1.conv_branch.0.running_mean, tiny_unet.u.blocks.block1.conv_branch.0.running_var, tiny_unet.u.blocks.block1.conv_branch.0.num_batches_tracked, tiny_unet.u.blocks.block1.conv_branch.2.weight, tiny_unet.u.blocks.block1.conv_branch.3.weight, tiny_unet.u.blocks.block1.conv_branch.3.bias, tiny_unet.u.blocks.block1.conv_branch.3.running_mean, tiny_unet.u.blocks.block1.conv_branch.3.running_var, tiny_unet.u.blocks.block1.conv_branch.3.num_batches_tracked, tiny_unet.u.blocks.block1.conv_branch.5.weight, tiny_unet.deconv.0.weight, tiny_unet.deconv.0.bias, tiny_unet.deconv.0.running_mean, tiny_unet.deconv.0.running_var, tiny_unet.deconv.0.num_batches_tracked, tiny_unet.deconv.2.weight, tiny_unet.blocks_tail.block0.i_branch.0.weight, tiny_unet.blocks_tail.block0.conv_branch.0.weight, tiny_unet.blocks_tail.block0.conv_branch.0.bias, tiny_unet.blocks_tail.block0.conv_branch.0.running_mean, tiny_unet.blocks_tail.block0.conv_branch.0.running_var, tiny_unet.blocks_tail.block0.conv_branch.0.num_batches_tracked, tiny_unet.blocks_tail.block0.conv_branch.2.weight, tiny_unet.blocks_tail.block0.conv_branch.3.weight, tiny_unet.blocks_tail.block0.conv_branch.3.bias, tiny_unet.blocks_tail.block0.conv_branch.3.running_mean, tiny_unet.blocks_tail.block0.conv_branch.3.running_var, tiny_unet.blocks_tail.block0.conv_branch.3.num_batches_tracked, 
tiny_unet.blocks_tail.block0.conv_branch.5.weight, tiny_unet.blocks_tail.block1.conv_branch.0.weight, tiny_unet.blocks_tail.block1.conv_branch.0.bias, tiny_unet.blocks_tail.block1.conv_branch.0.running_mean, tiny_unet.blocks_tail.block1.conv_branch.0.running_var, tiny_unet.blocks_tail.block1.conv_branch.0.num_batches_tracked, tiny_unet.blocks_tail.block1.conv_branch.2.weight, tiny_unet.blocks_tail.block1.conv_branch.3.weight, tiny_unet.blocks_tail.block1.conv_branch.3.bias, tiny_unet.blocks_tail.block1.conv_branch.3.running_mean, tiny_unet.blocks_tail.block1.conv_branch.3.running_var, tiny_unet.blocks_tail.block1.conv_branch.3.num_batches_tracked, tiny_unet.blocks_tail.block1.conv_branch.5.weight, tiny_unet_outputlayer.0.weight, tiny_unet_outputlayer.0.bias, tiny_unet_outputlayer.0.running_mean, tiny_unet_outputlayer.0.running_var, tiny_unet_outputlayer.0.num_batches_tracked, iou_score_linear.weight, iou_score_linear.bias, mask_linear.0.weight, mask_linear.0.bias, mask_linear.2.weight, mask_linear.2.bias
2022-04-28 13:43:27,630 - INFO - Training
[Exception|implicit_gemm]feat=torch.Size([17843, 64]),w=torch.Size([32, 2, 2, 2, 64]),pair=torch.Size([8, 17848]),act=17848,issubm=False,istrain=True
SPCONV_DEBUG_SAVE_PATH not found, you can specify SPCONV_DEBUG_SAVE_PATH as debug data save path to save debug data which can be attached in a issue.
Traceback (most recent call last):
File "./tools/train.py", line 185, in
main()
File "./tools/train.py", line 178, in main
train(epoch, model, optimizer, scaler, train_loader, cfg, logger, writer)
File "./tools/train.py", line 48, in train
loss, log_vars = model(batch, return_loss=True)
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 963, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/shrijan_pf/SoftGroup/softgroup/model/softgroup.py", line 97, in forward
return self.forward_train(**batch)
File "/home/shrijan_pf/SoftGroup/softgroup/util/utils.py", line 171, in wrapper
return func(*new_args, **new_kwargs)
File "/home/shrijan_pf/SoftGroup/softgroup/model/softgroup.py", line 109, in forward_train
semantic_scores, pt_offsets, output_feats = self.forward_backbone(input, v2p_map)
File "/home/shrijan_pf/SoftGroup/softgroup/model/softgroup.py", line 263, in forward_backbone
output = self.unet(output)
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/shrijan_pf/SoftGroup/softgroup/model/blocks.py", line 139, in forward
output_decoder = self.deconv(output_decoder)
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/spconv/pytorch/modules.py", line 137, in forward
input = module(input)
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/spconv/pytorch/conv.py", line 446, in forward
input._timer, self.fp32_accum)
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/cuda/amp/autocast_mode.py", line 118, in decorate_fwd
return fwd(*args, **kwargs)
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/spconv/pytorch/functional.py", line 200, in forward
raise e
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/spconv/pytorch/functional.py", line 191, in forward
fp32_accum)
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/spconv/pytorch/ops.py", line 1118, in implicit_gemm
fp32_accum=fp32_accum)
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/spconv/algo.py", line 661, in tune_and_cache
GemmMainUnitTest.stream_synchronize(stream)
RuntimeError: /io/build/temp.linux-x86_64-3.7/spconv/build/src/cumm/gemm/main/GemmMainUnitTest/GemmMainUnitTest_stream_synchronize.cc(11)
CUDA error 700
terminate called after throwing an instance of 'c10::CUDAError'
what(): CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Exception raised from create_event_internal at /opt/conda/conda-bld/pytorch_1646755861072/work/c10/cuda/CUDACachingAllocator.cpp:1230 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x4d (0x7f5183c301bd in /home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #1: + 0x1ec97 (0x7f5183e9ac97 in /home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x23a (0x7f5183e9f1fa in /home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/lib/libc10_cuda.so)
frame #3: + 0x2edaf8 (0x7f51ca243af8 in /home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #4: c10::TensorImpl::release_resources() + 0x175 (0x7f5183c16fb5 in /home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #5: + 0x1daa99 (0x7f51ca130a99 in /home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #6: + 0x4cf71c (0x7f51ca42571c in /home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #7: THPVariable_subclass_dealloc(_object*) + 0x299 (0x7f51ca425a39 in /home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #8: + 0xfc359 (0x55c9017e7359 in /home/shrijan_pf/anaconda3/envs/softgroup2/bin/python)
frame #9: + 0xfac88 (0x55c9017e5c88 in /home/shrijan_pf/anaconda3/envs/softgroup2/bin/python)
frame #10: + 0xfac88 (0x55c9017e5c88 in /home/shrijan_pf/anaconda3/envs/softgroup2/bin/python)
frame #11: + 0xfa6a8 (0x55c9017e56a8 in /home/shrijan_pf/anaconda3/envs/softgroup2/bin/python)
frame #12: + 0xfb128 (0x55c9017e6128 in /home/shrijan_pf/anaconda3/envs/softgroup2/bin/python)
frame #13: + 0xfb13c (0x55c9017e613c in /home/shrijan_pf/anaconda3/envs/softgroup2/bin/python)
frame #14: + 0xfb13c (0x55c9017e613c in /home/shrijan_pf/anaconda3/envs/softgroup2/bin/python)
frame #15: + 0xfb13c (0x55c9017e613c in /home/shrijan_pf/anaconda3/envs/softgroup2/bin/python)
frame #16: + 0xfb13c (0x55c9017e613c in /home/shrijan_pf/anaconda3/envs/softgroup2/bin/python)
frame #17: + 0x12b4a7 (0x55c9018164a7 in /home/shrijan_pf/anaconda3/envs/softgroup2/bin/python)
frame #18: PyDict_SetItemString + 0x89 (0x55c901822a19 in /home/shrijan_pf/anaconda3/envs/softgroup2/bin/python)
frame #19: PyImport_Cleanup + 0xab (0x55c901897b8b in /home/shrijan_pf/anaconda3/envs/softgroup2/bin/python)
frame #20: Py_FinalizeEx + 0x64 (0x55c90190c714 in /home/shrijan_pf/anaconda3/envs/softgroup2/bin/python)
frame #21: + 0x232e20 (0x55c90191de20 in /home/shrijan_pf/anaconda3/envs/softgroup2/bin/python)
frame #22: _Py_UnixMain + 0x3c (0x55c90191e18c in /home/shrijan_pf/anaconda3/envs/softgroup2/bin/python)
frame #23: __libc_start_main + 0xe7 (0x7f51e65b4c87 in /lib/x86_64-linux-gnu/libc.so.6)
frame #24: + 0x1d803a (0x55c9018c303a in /home/shrijan_pf/anaconda3/envs/softgroup2/bin/python)

ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -6) local_rank: 0 (pid: 13703) of binary: /home/shrijan_pf/anaconda3/envs/softgroup2/bin/python
Traceback (most recent call last):
File "/home/shrijan_pf/anaconda3/envs/softgroup2/bin/torchrun", line 33, in
sys.exit(load_entry_point('torch==1.11.0', 'console_scripts', 'torchrun')())
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/distributed/elastic/multiprocessing/errors/init.py", line 345, in wrapper
return f(*args, **kwargs)
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/distributed/run.py", line 724, in main
run(args)
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/distributed/run.py", line 718, in run
)(*cmd_args)
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 131, in call
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 247, in launch_agent
failures=result.failures,
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

./tools/train.py FAILED

Failures:
<NO_OTHER_FAILURES>

Root Cause (first observed failure):
[0]:
time : 2022-04-28_13:43:50
host : BQ-DX1100-CT2
rank : 0 (local_rank: 0)
exitcode : -6 (pid: 13703)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 13703

@thangvubk
Owner

You may set num_gpu to 1 and add a pdb debug checkpoint to forward_train() to see where the error happens. I also noticed that your max_npoint is 10 times larger than the default value.
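
For reference, a minimal, hypothetical sketch of such a pdb checkpoint (the function below is a stand-in, not SoftGroup's actual forward_train):

```python
import pdb

import torch

def forward_train(feats: torch.Tensor) -> torch.Tensor:
    # Pause just before the suspect call; at the (Pdb) prompt,
    # `p feats.shape` inspects tensors and `n` steps line by line
    # until the CUDA error is raised.
    pdb.set_trace()
    return feats * 2  # stand-in for the backbone call

out = forward_train(torch.randn(4, 3))
```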

@SijanNeupane49
Author

Thanks for your quick reply. I am getting the CUDA 700 error, which arises when I pass the output data to the UNet (highlighted below). However, I am not sure what it is. As I have only 3 semantic classes in my dataset, could it be that I forgot to change something in the config file that is necessary when there are only 3 classes? In addition, I also changed the GPU number to 1 in dist_train.sh; however, I get a value error 'Unsupported nproc_per_node value: configs/softgroup_s3dis_backbone_fold5_mintA.yaml'. Am I supposed to set num_gpu anywhere else?

[Exception|implicit_gemm]feat=torch.Size([20354, 96]),w=torch.Size([64, 2, 2, 2, 96]),pair=torch.Size([8, 265416]),act=265416,issubm=False,istrain=True
SPCONV_DEBUG_SAVE_PATH not found, you can specify SPCONV_DEBUG_SAVE_PATH as debug data save path to save debug data which can be attached in a issue.
Traceback (most recent call last):
File "./tools/train.py", line 185, in
main()
File "./tools/train.py", line 178, in main
train(epoch, model, optimizer, scaler, train_loader, cfg, logger, writer)
File "./tools/train.py", line 48, in train
loss, log_vars = model(batch, return_loss=True)
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/shrijan_pf/SoftGroup/softgroup/model/softgroup.py", line 97, in forward
return self.forward_train(**batch)
File "/home/shrijan_pf/SoftGroup/softgroup/util/utils.py", line 171, in wrapper
return func(*new_args, **new_kwargs)
File "/home/shrijan_pf/SoftGroup/softgroup/model/softgroup.py", line 109, in forward_train
semantic_scores, pt_offsets, output_feats = self.forward_backbone(input, v2p_map)
File "/home/shrijan_pf/SoftGroup/softgroup/model/softgroup.py", line 263, in forward_backbone
output = self.unet(output)
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/shrijan_pf/SoftGroup/softgroup/model/blocks.py", line 138, in forward
output_decoder = self.u(output_decoder)
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/shrijan_pf/SoftGroup/softgroup/model/blocks.py", line 139, in forward
output_decoder = self.deconv(output_decoder)
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/spconv/pytorch/modules.py", line 137, in forward
input = module(input)
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/spconv/pytorch/conv.py", line 446, in forward
input._timer, self.fp32_accum)
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/cuda/amp/autocast_mode.py", line 116, in decorate_fwd
return fwd(*_cast(args, cast_inputs), **_cast(kwargs, cast_inputs))
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/spconv/pytorch/functional.py", line 200, in forward
raise e
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/spconv/pytorch/functional.py", line 191, in forward
fp32_accum)
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/spconv/pytorch/ops.py", line 1118, in implicit_gemm
fp32_accum=fp32_accum)
File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/spconv/algo.py", line 661, in tune_and_cache
GemmMainUnitTest.stream_synchronize(stream)
RuntimeError: /io/build/temp.linux-x86_64-3.7/spconv/build/src/cumm/gemm/main/GemmMainUnitTest/GemmMainUnitTest_stream_synchronize.cc(11)
CUDA error 700

@Atopis

Atopis commented Apr 29, 2022

@SijanNeupane49 Hi, I have the same problem.
(ERROR: SPCONV_DEBUG_SAVE_PATH not found, you can specify SPCONV_DEBUG_SAVE_PATH as debug data save path to save debug data which can be attached in a issue.)
I have only 2 classes in my dataset. I changed the 'class_numpoint_mean'.
If you have solved this problem, could you help me? Thank you!

@thangvubk
Owner

@SijanNeupane49 Since your bug is in the UNet, I think your data after voxelization is not suitable for it. Can you print out input.spatial_shape and input.indices.shape in the forward function?
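
A small sketch of what those prints could look like, using a toy spconv SparseConvTensor (the shapes here are arbitrary, not from SoftGroup):

```python
import torch
import spconv.pytorch as spconv

# SparseConvTensor exposes .spatial_shape (voxel grid dims) and
# .indices (N x 4: [batch_idx, z, y, x]) -- the two things to print
# inside the UNet's forward().
features = torch.randn(5, 3)
indices = torch.zeros(5, 4, dtype=torch.int32)
x = spconv.SparseConvTensor(features, indices, spatial_shape=[128, 128, 64], batch_size=1)

print('input.spatial_shape', x.spatial_shape)  # [128, 128, 64]
print('input.indices.shape', x.indices.shape)  # torch.Size([5, 4])
```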

@Atopis

Atopis commented Apr 29, 2022

train.py
2022-04-29 16:58:26,249 - INFO - Config:
model:
  channels: 32
  num_blocks: 7
  semantic_classes: 2
  instance_classes: 2
  sem2ins_classes: [0, 1]
  semantic_only: True
  ignore_label: -100
  grouping_cfg:
    score_thr: 0.2
    radius: 0.04
    mean_active: 3 # 300
    class_numpoint_mean: [9721, 98440]
    npoint_thr: 0.05 # absolute if class_numpoint == -1, relative if class_numpoint != -1
    ignore_classes: [] #[0, 1]
  instance_voxel_cfg:
    scale: 50
    spatial_shape: 20
  train_cfg:
    max_proposal_num: 200
    pos_iou_thr: 0.5
  test_cfg:
    x4_split: True
    cls_score_thr: 0.001
    mask_score_thr: -0.5
    min_npoint: 100
  fixed_modules: []

data:
  train:
    type: 's3dis'
    data_root: '../dataset/s3dis/preprocess'
    prefix: ['People_1', 'People_2', 'People_3', 'People_4', 'People_6']
    suffix: '_inst_nostuff.pth'
    repeat: 20
    training: True
    voxel_cfg:
      scale: 50
      spatial_shape: [128, 512]
      max_npoint: 250000
      min_npoint: 5000
  test:
    type: 's3dis'
    data_root: '../dataset/s3dis/preprocess'
    prefix: 'People_5'
    suffix: '_inst_nostuff.pth'
    training: False
    voxel_cfg:
      scale: 50 # 50
      spatial_shape: [128, 512]
      max_npoint: 250000
      min_npoint: 5000

dataloader:
  train:
    batch_size: 8
    num_workers: 4
  test:
    batch_size: 1
    num_workers: 1

optimizer:
  type: 'Adam'
  lr: 0.004

save_cfg:
  semantic: True
  offset: True
  instance: False

fp16: False
epochs: 20
step_epoch: 0
save_freq: 2
pretrain: '../hais_ckpt_spconv2.pth'
work_dir: ''

2022-04-29 16:58:26,249 - INFO - Distributed: False
2022-04-29 16:58:26,249 - INFO - Mix precision training: False

2022-04-29 16:58:28,973 - INFO - Load train dataset: 580 scans
2022-04-29 16:58:28,974 - INFO - Load test dataset: 5 scans
2022-04-29 16:58:28,975 - INFO - Load pretrain from ../hais_ckpt_spconv2.pth
2022-04-29 16:58:29,209 - INFO - removed keys in source state_dict due to size mismatch: semantic_linear.3.weight, semantic_linear.3.bias
2022-04-29 16:58:29,209 - INFO - missing keys in source state_dict: semantic_linear.3.weight, semantic_linear.3.bias
2022-04-29 16:58:29,209 - INFO - unexpected key in source state_dict: tiny_unet.blocks.block0.conv_branch.0.weight, tiny_unet.blocks.block0.conv_branch.0.bias, tiny_unet.blocks.block0.conv_branch.0.running_mean, tiny_unet.blocks.block0.conv_branch.0.running_var, tiny_unet.blocks.block0.conv_branch.0.num_batches_tracked, tiny_unet.blocks.block0.conv_branch.2.weight, tiny_unet.blocks.block0.conv_branch.3.weight, tiny_unet.blocks.block0.conv_branch.3.bias, tiny_unet.blocks.block0.conv_branch.3.running_mean, tiny_unet.blocks.block0.conv_branch.3.running_var, tiny_unet.blocks.block0.conv_branch.3.num_batches_tracked, tiny_unet.blocks.block0.conv_branch.5.weight, tiny_unet.blocks.block1.conv_branch.0.weight, tiny_unet.blocks.block1.conv_branch.0.bias, tiny_unet.blocks.block1.conv_branch.0.running_mean, tiny_unet.blocks.block1.conv_branch.0.running_var, tiny_unet.blocks.block1.conv_branch.0.num_batches_tracked, tiny_unet.blocks.block1.conv_branch.2.weight, tiny_unet.blocks.block1.conv_branch.3.weight, tiny_unet.blocks.block1.conv_branch.3.bias, tiny_unet.blocks.block1.conv_branch.3.running_mean, tiny_unet.blocks.block1.conv_branch.3.running_var, tiny_unet.blocks.block1.conv_branch.3.num_batches_tracked, tiny_unet.blocks.block1.conv_branch.5.weight, tiny_unet.conv.0.weight, tiny_unet.conv.0.bias, tiny_unet.conv.0.running_mean, tiny_unet.conv.0.running_var, tiny_unet.conv.0.num_batches_tracked, tiny_unet.conv.2.weight, tiny_unet.u.blocks.block0.conv_branch.0.weight, tiny_unet.u.blocks.block0.conv_branch.0.bias, tiny_unet.u.blocks.block0.conv_branch.0.running_mean, tiny_unet.u.blocks.block0.conv_branch.0.running_var, tiny_unet.u.blocks.block0.conv_branch.0.num_batches_tracked, tiny_unet.u.blocks.block0.conv_branch.2.weight, tiny_unet.u.blocks.block0.conv_branch.3.weight, tiny_unet.u.blocks.block0.conv_branch.3.bias, tiny_unet.u.blocks.block0.conv_branch.3.running_mean, tiny_unet.u.blocks.block0.conv_branch.3.running_var, tiny_unet.u.blocks.block0.conv_branch.3.num_batches_tracked, tiny_unet.u.blocks.block0.conv_branch.5.weight, tiny_unet.u.blocks.block1.conv_branch.0.weight, tiny_unet.u.blocks.block1.conv_branch.0.bias, tiny_unet.u.blocks.block1.conv_branch.0.running_mean, tiny_unet.u.blocks.block1.conv_branch.0.running_var, tiny_unet.u.blocks.block1.conv_branch.0.num_batches_tracked, tiny_unet.u.blocks.block1.conv_branch.2.weight, tiny_unet.u.blocks.block1.conv_branch.3.weight, tiny_unet.u.blocks.block1.conv_branch.3.bias, tiny_unet.u.blocks.block1.conv_branch.3.running_mean, tiny_unet.u.blocks.block1.conv_branch.3.running_var, tiny_unet.u.blocks.block1.conv_branch.3.num_batches_tracked, tiny_unet.u.blocks.block1.conv_branch.5.weight, tiny_unet.deconv.0.weight, tiny_unet.deconv.0.bias, tiny_unet.deconv.0.running_mean, tiny_unet.deconv.0.running_var, tiny_unet.deconv.0.num_batches_tracked, tiny_unet.deconv.2.weight, tiny_unet.blocks_tail.block0.i_branch.0.weight, tiny_unet.blocks_tail.block0.conv_branch.0.weight, tiny_unet.blocks_tail.block0.conv_branch.0.bias, tiny_unet.blocks_tail.block0.conv_branch.0.running_mean, tiny_unet.blocks_tail.block0.conv_branch.0.running_var, tiny_unet.blocks_tail.block0.conv_branch.0.num_batches_tracked, tiny_unet.blocks_tail.block0.conv_branch.2.weight, tiny_unet.blocks_tail.block0.conv_branch.3.weight, tiny_unet.blocks_tail.block0.conv_branch.3.bias, tiny_unet.blocks_tail.block0.conv_branch.3.running_mean, tiny_unet.blocks_tail.block0.conv_branch.3.running_var, tiny_unet.blocks_tail.block0.conv_branch.3.num_batches_tracked, 
tiny_unet.blocks_tail.block0.conv_branch.5.weight, tiny_unet.blocks_tail.block1.conv_branch.0.weight, tiny_unet.blocks_tail.block1.conv_branch.0.bias, tiny_unet.blocks_tail.block1.conv_branch.0.running_mean, tiny_unet.blocks_tail.block1.conv_branch.0.running_var, tiny_unet.blocks_tail.block1.conv_branch.0.num_batches_tracked, tiny_unet.blocks_tail.block1.conv_branch.2.weight, tiny_unet.blocks_tail.block1.conv_branch.3.weight, tiny_unet.blocks_tail.block1.conv_branch.3.bias, tiny_unet.blocks_tail.block1.conv_branch.3.running_mean, tiny_unet.blocks_tail.block1.conv_branch.3.running_var, tiny_unet.blocks_tail.block1.conv_branch.3.num_batches_tracked, tiny_unet.blocks_tail.block1.conv_branch.5.weight, tiny_unet_outputlayer.0.weight, tiny_unet_outputlayer.0.bias, tiny_unet_outputlayer.0.running_mean, tiny_unet_outputlayer.0.running_var, tiny_unet_outputlayer.0.num_batches_tracked, iou_score_linear.weight, iou_score_linear.bias, mask_linear.0.weight, mask_linear.0.bias, mask_linear.2.weight, mask_linear.2.bias
2022-04-29 16:58:29,210 - INFO - Training
input.spatial_shape
[3588, 3590, 1894]
input.indices.shape
torch.Size([394290, 4])
input.spatial_shape
[3588, 3590, 1894]
input.indices.shape
torch.Size([394290, 4])
input.spatial_shape
[1794, 1795, 947]
input.indices.shape
torch.Size([392313, 4])
input.spatial_shape
[1794, 1795, 947]
input.indices.shape
torch.Size([392313, 4])
input.spatial_shape
[897, 897, 473]
input.indices.shape
torch.Size([245197, 4])
input.spatial_shape
[897, 897, 473]
input.indices.shape
torch.Size([245197, 4])
input.spatial_shape
[448, 448, 236]
input.indices.shape
torch.Size([143955, 4])
input.spatial_shape
[448, 448, 236]
input.indices.shape
torch.Size([143955, 4])
input.spatial_shape
[224, 224, 118]
input.indices.shape
torch.Size([92121, 4])
input.spatial_shape
[224, 224, 118]
input.indices.shape
torch.Size([92121, 4])
input.spatial_shape
[112, 112, 59]
input.indices.shape
torch.Size([34397, 4])
input.spatial_shape
[112, 112, 59]
input.indices.shape
torch.Size([34397, 4])
input.spatial_shape
[56, 56, 29]
input.indices.shape
torch.Size([9026, 4])
input.spatial_shape
[56, 56, 29]
input.indices.shape
torch.Size([9026, 4])
input.spatial_shape
[112, 112, 59]
input.indices.shape
torch.Size([34397, 4])
input.spatial_shape
[112, 112, 59]
input.indices.shape
torch.Size([34397, 4])
input.spatial_shape
[112, 112, 59]
input.indices.shape
torch.Size([34397, 4])
input.spatial_shape
[224, 224, 118]
input.indices.shape
torch.Size([92121, 4])
input.spatial_shape
[224, 224, 118]
input.indices.shape
torch.Size([92121, 4])
input.spatial_shape
[224, 224, 118]
input.indices.shape
torch.Size([92121, 4])
input.spatial_shape
[448, 448, 236]
input.indices.shape
torch.Size([143955, 4])
input.spatial_shape
[448, 448, 236]
input.indices.shape
torch.Size([143955, 4])
input.spatial_shape
[448, 448, 236]
input.indices.shape
torch.Size([143955, 4])
input.spatial_shape
[897, 897, 473]
input.indices.shape
torch.Size([245197, 4])
input.spatial_shape
[897, 897, 473]
input.indices.shape
torch.Size([245197, 4])
input.spatial_shape
[897, 897, 473]
input.indices.shape
torch.Size([245197, 4])
[Exception|implicit_gemm]feat=torch.Size([245197, 96]),w=torch.Size([64, 2, 2, 2, 96]),pair=torch.Size([8, 392313]),act=392313,issubm=False,istrain=True
SPCONV_DEBUG_SAVE_PATH not found, you can specify SPCONV_DEBUG_SAVE_PATH as debug data save path to save debug data which can be attached in a issue.
Traceback (most recent call last):
File "/home/chenyy/project/3D/SoftGroup_tooth (new)/tools/train.py", line 185, in
main()
File "/home/chenyy/project/3D/SoftGroup_tooth (new)/tools/train.py", line 178, in main
train(epoch, model, optimizer, scaler, train_loader, cfg, logger, writer)
File "/home/chenyy/project/3D/SoftGroup_tooth (new)/tools/train.py", line 48, in train
loss, log_vars = model(batch, return_loss=True)
File "/home/chenyy/.conda/envs/softgroup/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/chenyy/project/3D/SoftGroup_tooth (new)/softgroup/model/softgroup.py", line 97, in forward
return self.forward_train(**batch)
File "/home/chenyy/project/3D/SoftGroup_tooth (new)/softgroup/util/utils.py", line 171, in wrapper
return func(*new_args, **new_kwargs)
File "/home/chenyy/project/3D/SoftGroup_tooth (new)/softgroup/model/softgroup.py", line 109, in forward_train
semantic_scores, pt_offsets, output_feats = self.forward_backbone(input, v2p_map)
File "/home/chenyy/project/3D/SoftGroup_tooth (new)/softgroup/model/softgroup.py", line 263, in forward_backbone
output = self.unet(output)
File "/home/chenyy/.conda/envs/softgroup/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/chenyy/project/3D/SoftGroup_tooth (new)/softgroup/model/blocks.py", line 146, in forward
output_decoder = self.u(output_decoder)
File "/home/chenyy/.conda/envs/softgroup/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/chenyy/project/3D/SoftGroup_tooth (new)/softgroup/model/blocks.py", line 147, in forward
output_decoder = self.deconv(output_decoder)
File "/home/chenyy/.conda/envs/softgroup/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/chenyy/.local/lib/python3.8/site-packages/spconv/pytorch/modules.py", line 137, in forward
input = module(input)
File "/home/chenyy/.conda/envs/softgroup/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/chenyy/.local/lib/python3.8/site-packages/spconv/pytorch/conv.py", line 441, in forward
out_features = Fsp.implicit_gemm(
File "/home/chenyy/.conda/envs/softgroup/lib/python3.8/site-packages/torch/cuda/amp/autocast_mode.py", line 219, in decorate_fwd
return fwd(*args, **kwargs)
File "/home/chenyy/.local/lib/python3.8/site-packages/spconv/pytorch/functional.py", line 200, in forward
raise e
File "/home/chenyy/.local/lib/python3.8/site-packages/spconv/pytorch/functional.py", line 185, in forward
out, mask_out, mask_width = ops.implicit_gemm(features, filters,
File "/home/chenyy/.local/lib/python3.8/site-packages/spconv/pytorch/ops.py", line 1103, in implicit_gemm
tune_res, _ = CONV.tune_and_cache(
File "/home/chenyy/.local/lib/python3.8/site-packages/spconv/algo.py", line 661, in tune_and_cache
GemmMainUnitTest.stream_synchronize(stream)
RuntimeError: /home/chenyy/.local/lib/python3.8/site-packages/spconv/build/src/cumm/gemm/main/GemmMainUnitTest/GemmMainUnitTest_stream_synchronize.cc(12)
CUDA error 700

Process finished with exit code 1

——————————————————————
I think my problem is the same as the author's. This is my error; I printed input.spatial_shape and input.indices.shape above.
Thank you for your help !

@thangvubk
Owner

thangvubk commented Apr 29, 2022

Your input spatial_shape of [3588, 3590, 1894] is too big. It is also weird that your spatial shape is larger than 512. Did you make changes to the dataset?

@Atopis

Atopis commented Apr 29, 2022

I don't think I made changes to the dataset.
My dataset's format is xyzrgbaaaa. The preprocessing only takes xyz[0:3] and rgb[3:6], so I think the 'aaaa' has no influence, does it?
Can you give me some advice on how to make it smaller, or which part I need to pay attention to?

@thangvubk
Owner

I think the default voxel size of 0.02 m is too small for your data, leading to a spatial_shape that is too big. Please check the config for the new dataset STPLS3D here for tips on new datasets.
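
As a rough illustration of the relation (assumed here: the spatial shape is roughly the scan extent times the scale, i.e. divided by the voxel size):

```python
import numpy as np

# With scale = 50 (voxel size 1/50 = 0.02 m), a scan spanning ~72 m per axis
# yields ~3600 voxels per axis, matching the reported [3588, 3590, 1894].
xyz = np.random.rand(100000, 3) * [71.8, 71.8, 37.9]  # stand-in coordinates in metres
scale = 50
spatial_shape = np.ceil((xyz.max(0) - xyz.min(0)) * scale).astype(int)
print(spatial_shape)  # roughly [3590 3590 1895]; lowering scale shrinks the grid
```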

@SijanNeupane49
Author

SijanNeupane49 commented May 2, 2022

Thank you for your reply. I was able to successfully fine-tune and train the model from the frozen backbone on my dataset for the first time. I changed the following in both config files:

data:
  train:
    type: 's3dis'
    data_root: 'dataset/s3dis/preprocess_mint' #changed, original 'dataset/s3dis/preprocess'
    prefix: ['Area_1'] # changed, original ['Area_1', 'Area_2', 'Area_3', 'Area_4', 'Area_6']
    suffix: '_inst_nostuff.pth'
    repeat: 20
    training: True
    voxel_cfg:
      scale: 10 #changed, original 50
      spatial_shape: [128, 512]
      max_npoint: 250000
      min_npoint: 5000

However, while training the model from the frozen backbone, no epochs were saved, whereas while fine-tuning on my dataset every second epoch was saved. Then I ran the training from the frozen backbone again, but I got the following value error:

2022-05-02 12:58:35,719 - INFO - Distributed: False
2022-05-02 12:58:35,719 - INFO - Mix precision training: False
2022-05-02 12:58:36,926 - INFO - Load train dataset: 640 scans
2022-05-02 12:58:36,926 - INFO - Load test dataset: 29 scans
Traceback (most recent call last):
  File "./tools/train.py", line 185, in <module>
    main()
  File "./tools/train.py", line 164, in main
    optimizer = build_optimizer(model, cfg.optimizer)
  File "/home/shrijan_pf/SoftGroup/softgroup/util/optim.py", line 9, in build_optimizer
    return optim(filter(lambda p: p.requires_grad, model.parameters()), **_optim_cfg)
  File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/optim/adam.py", line 81, in __init__
    super(Adam, self).__init__(params, defaults)
  File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/optim/optimizer.py", line 49, in __init__
    raise ValueError("optimizer got an empty parameter list")
ValueError: optimizer got an empty parameter list
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 23234) of binary: /home/shrijan_pf/anaconda3/envs/softgroup2/bin/python
Traceback (most recent call last):
  File "/home/shrijan_pf/anaconda3/envs/softgroup2/bin/torchrun", line 33, in <module>
    sys.exit(load_entry_point('torch==1.11.0', 'console_scripts', 'torchrun')())
  File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper
    return f(*args, **kwargs)
  File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/distributed/run.py", line 724, in main
    run(args)
  File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/distributed/run.py", line 718, in run
    )(*cmd_args)
  File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 247, in launch_agent
    failures=result.failures,
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
./tools/train.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2022-05-02_12:58:40
  host      : BQ-DX1100-CT2
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 23234)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

@thangvubk
Owner

The error indicates that your whole model is frozen, so the optimizer does not have any parameters to optimize.
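
A minimal reproduction of that failure mode (plain PyTorch, mirroring the requires_grad filter visible in the traceback above):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
for p in model.parameters():
    p.requires_grad = False  # roughly what listing every module in fixed_modules does

trainable = [p for p in model.parameters() if p.requires_grad]
print(len(trainable))  # 0
torch.optim.Adam(trainable, lr=0.004)  # raises ValueError: optimizer got an empty parameter list
```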

@theshapguy

theshapguy commented May 8, 2022

What could be the possible cause of this? I'm also getting the same error when I train with my own dataset. When I train with the S3DIS dataset, I do not get this issue.

@thangvubk
Owner

@theshapguy can you share the config file?

@SijanNeupane49
Author

SijanNeupane49 commented May 8, 2022

This is my config file for softgroup_s3dis_fold5_mintA.yaml:

 model:
  channels: 32
  num_blocks: 3
  semantic_classes: 3 #changed, original 13
  instance_classes: 3 #changed, original 13
  sem2ins_classes: []
  semantic_only: True
  ignore_label: -100
  grouping_cfg:
    score_thr: 0.2
    radius: 0.04
    mean_active: 300
    class_numpoint_mean: [1823, 7457, 6189]
    npoint_thr: 0.05  # absolute if class_numpoint == -1, relative if class_numpoint != -1
    ignore_classes: [-99]
  instance_voxel_cfg:
    scale: 50
    spatial_shape: 20
  train_cfg:
    max_proposal_num: 200
    pos_iou_thr: 0.5
  test_cfg:
    x4_split: True
    cls_score_thr: 0.001
    mask_score_thr: -0.5
    min_npoint: 100
  fixed_modules: []

data:
  train:
    type: 's3dis'
    data_root: 'dataset/s3dis/preprocess_mint' # changed, original 'dataset/s3dis/preprocess'
    prefix: ['Area_1'] # changed, original ['Area_1', 'Area_2', 'Area_3', 'Area_4', 'Area_6']
    suffix: '_inst_nostuff.pth'
    repeat: 20
    training: True
    voxel_cfg:
      scale: 10 # changed, original 50
      spatial_shape: [128, 512]
      max_npoint: 250000
      min_npoint: 5000
  test:
    type: 's3dis'
    data_root: 'dataset/s3dis/preprocess_mint' # changed, original 'dataset/s3dis/preprocess'
    prefix: 'Area_2' #changed, original 'Area_5'
    suffix: '_inst_nostuff.pth'
    training: False
    voxel_cfg:
      scale: 50
      spatial_shape: [128, 512]
      max_npoint: 250000
      min_npoint: 5000

dataloader:
  train:
    batch_size: 2 #changed; original was 4
    num_workers: 4
  test:
    batch_size: 1 #changed; original was 1
    num_workers: 1

optimizer:
  type: 'Adam'
  lr: 0.004

save_cfg:
  semantic: True
  offset: True
  instance: False

fp16: False
epochs: 5
step_epoch: 0
save_freq: 2
pretrain: 'work_dirs/softgroup_s3dis_backbone_fold5_mintA/latest.pth' # this is the file generated from the finetune pretrained HAIS point-wise prediction network (backbone) on my dataset
work_dir: ''

Also, it would be great if you could explain what spatial_shape inside voxel_cfg is. Do I need to change this for a custom dataset? And the same for

grouping_cfg -> class_numpoint_mean

I think once I understand the config file better, I could easily solve my problem.

FYI: I am getting the above error only when I train the frozen backbone; the fine-tuning part is fine.
Custom_Dataset: https://drive.google.com/file/d/1yn41B0JD6bSx0pJs1mtqZSqMZR2kmo23/view?usp=sharing

@thangvubk
Owner

The spatial shape is the [min, max] dimension of the cropped scan in terms of voxels. See here and here. It is weird that your model does not have any parameters. I notice that the model is not run in distributed mode. Could you run the model in distributed mode using ./dist_train.sh with NUM_GPU = 1 and see if the problem happens again?
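
A rough sketch of that [min, max] clamp (illustrative only; the actual crop logic lives in the dataset code linked above):

```python
import numpy as np

# voxel_cfg.spatial_shape: [128, 512] acts as a [min, max] bound on the
# voxel extent of each cropped scan.
raw_extent = np.array([3588, 3590, 1894])  # extent from the log earlier in this thread
clipped = np.clip(raw_extent, 128, 512)
print(clipped)  # [512 512 512]
```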

@thangvubk
Owner

I think the problem is that you are not running in distributed mode. It is fixed in the latest commit.

@SijanNeupane49
Author

When I do distributed training, I get the following error:
RuntimeError: DistributedDataParallel is not needed when a module doesn't have any parameter that requires a gradient.
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 13536) of binary: /home/shrijan_pf/anaconda3/envs/softgroup2/bin/python

(softgroup2) shrijan_pf@BQ-DX1100-CT2:~/SoftGroup$ ./tools/dist_train.sh configs/softgroup_s3dis_fold5_mintA.yaml 1 --skip_validate
2022-05-01 13:11:49,819 - INFO - Config:
model:
  channels: 32
  num_blocks: 3
  semantic_classes: 3 #changed, original 13
  instance_classes: 3 #changed, original 13
  sem2ins_classes: []
  semantic_only: True
  ignore_label: -100
  grouping_cfg:
    score_thr: 0.2
    radius: 0.04
    mean_active: 300
    class_numpoint_mean: [1823, 7457, 6189]
    npoint_thr: 0.05  # absolute if class_numpoint == -1, relative if class_numpoint != -1
    ignore_classes: [-99]
  instance_voxel_cfg:
    scale: 50
    spatial_shape: 20
  train_cfg:
    max_proposal_num: 200
    pos_iou_thr: 0.5
  test_cfg:
    x4_split: True
    cls_score_thr: 0.001
    mask_score_thr: -0.5
    min_npoint: 100
  fixed_modules: ['input_conv', 'unet', 'output_layer', 'semantic_linear', 'offset_linear']

data:
  train:
    type: 's3dis'
    data_root: 'dataset/s3dis/preprocess_mint' #changed, original 'dataset/s3dis/preprocess'
    prefix: ['Area_1'] # changed, original ['Area_1', 'Area_2', 'Area_3', 'Area_4', 'Area_6']
    suffix: '_inst_nostuff.pth'
    repeat: 20
    training: True
    voxel_cfg:
      scale: 10 #changed, original 50
      spatial_shape: [128, 512]
      max_npoint: 250000
      min_npoint: 5000
  test:
    type: 's3dis'
    data_root: 'dataset/s3dis/preprocess_mint' #changed, original 'dataset/s3dis/preprocess'
    prefix: 'Area_2' #changed, original 'Area_5'
    suffix: '_inst_nostuff.pth'
    training: False
    voxel_cfg:
      scale: 50
      spatial_shape: [128, 512]
      max_npoint: 250000
      min_npoint: 5000

dataloader:
  train:
    batch_size: 2 #changed; original was 4
    num_workers: 4
  test:
    batch_size: 1 #changed; original was 1
    num_workers: 1

optimizer:
  type: 'Adam'
  lr: 0.004

save_cfg:
  semantic: True
  offset: True
  instance: False

fp16: False
epochs: 5
step_epoch: 0
save_freq: 2
pretrain: 'work_dirs/softgroup_s3dis_backbone_fold5_mintA/latest.pth'
work_dir: ''

2022-05-01 13:11:49,819 - INFO - Distributed: True
2022-05-01 13:11:49,819 - INFO - Mix precision training: False
Traceback (most recent call last):
  File "./tools/train.py", line 185, in <module>
    main()
  File "./tools/train.py", line 153, in main
    model = DistributedDataParallel(model, device_ids=[torch.cuda.current_device()])
  File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 542, in __init__
    "DistributedDataParallel is not needed when a module "
  File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 674, in _log_and_throw
    raise err_type(err_msg)
RuntimeError: DistributedDataParallel is not needed when a module doesn't have any parameter that requires a gradient.
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 13536) of binary: /home/shrijan_pf/anaconda3/envs/softgroup2/bin/python
Traceback (most recent call last):
  File "/home/shrijan_pf/anaconda3/envs/softgroup2/bin/torchrun", line 33, in <module>
    sys.exit(load_entry_point('torch==1.11.0', 'console_scripts', 'torchrun')())
  File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper
    return f(*args, **kwargs)
  File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/distributed/run.py", line 724, in main
    run(args)
  File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/distributed/run.py", line 718, in run
    )(*cmd_args)
  File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 247, in launch_agent
    failures=result.failures,
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
./tools/train.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2022-05-01_13:11:54
  host      : BQ-DX1100-CT2
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 13536)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

@SijanNeupane49
Author

When I change NUM_GPU to 1 in dist_train.sh, I get the following error:

#!/usr/bin/env bash
CONFIG=$1
GPUS=$2
PORT=${PORT:-29500}

OMP_NUM_THREADS=1 torchrun --nproc_per_node=$GPUS --master_port=$PORT $(dirname "$0")/train.py --dist $CONFIG ${@:3}
(softgroup2) shrijan_pf@BQ-DX1100-CT2:~/SoftGroup$ ./tools/dist_train.sh configs/softgroup_s3dis_fold5_mintA.yaml  --skip_validate
ERROR:torch.distributed.elastic.multiprocessing.errors.error_handler:{
  "message": {
    "message": "ValueError: Unsupported nproc_per_node value: --skip_validate",
    "extraInfo": {
      "py_callstack": "Traceback (most recent call last):\n  File \"/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/distributed/run.py\", line 569, in determine_local_world_size\n    return int(nproc_per_node)\nValueError: invalid literal for int() with base 10: '--skip_validate'\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File \"/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py\", line 345, in wrapper\n    return f(*args, **kwargs)\n  File \"/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/distributed/run.py\", line 724, in main\n    run(args)\n  File \"/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/distributed/run.py\", line 714, in run\n    config, cmd, cmd_args = config_from_args(args)\n  File \"/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/distributed/run.py\", line 622, in config_from_args\n    nproc_per_node = determine_local_world_size(args.nproc_per_node)\n  File \"/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/distributed/run.py\", line 587, in determine_local_world_size\n    raise ValueError(f\"Unsupported nproc_per_node value: {nproc_per_node}\")\nValueError: Unsupported nproc_per_node value: --skip_validate\n",
      "timestamp": "1651404767"
    }
  }
}
Traceback (most recent call last):
  File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/distributed/run.py", line 569, in determine_local_world_size
    return int(nproc_per_node)
ValueError: invalid literal for int() with base 10: '--skip_validate'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/shrijan_pf/anaconda3/envs/softgroup2/bin/torchrun", line 33, in <module>
    sys.exit(load_entry_point('torch==1.11.0', 'console_scripts', 'torchrun')())
  File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper
    return f(*args, **kwargs)
  File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/distributed/run.py", line 724, in main
    run(args)
  File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/distributed/run.py", line 714, in run
    config, cmd, cmd_args = config_from_args(args)
  File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/distributed/run.py", line 622, in config_from_args
    nproc_per_node = determine_local_world_size(args.nproc_per_node)
  File "/home/shrijan_pf/anaconda3/envs/softgroup2/lib/python3.7/site-packages/torch/distributed/run.py", line 587, in determine_local_world_size
    raise ValueError(f"Unsupported nproc_per_node value: {nproc_per_node}")
ValueError: Unsupported nproc_per_node value: --skip_validate
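
For reference, dist_train.sh reads the config as $1 and the GPU count as $2 (as in the invocation at the top of this issue), so any extra flag has to come third; otherwise torchrun receives it as nproc_per_node:

```bash
# config first, GPU count second, extra flags afterwards
./tools/dist_train.sh configs/softgroup_s3dis_fold5_mintA.yaml 1 --skip_validate
```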

@thangvubk
Owner

You need to retrain the pretrained model. Your pretrained model is not correct.

@SijanNeupane49
Author

The following is how I got my pretrained model "work_dirs/softgroup_s3dis_backbone_fold5_mintA/latest.pth". Should I change something in my config file to retrain the model?

model:
  channels: 32
  num_blocks: 3
  semantic_classes: 3 #changed, original 13
  instance_classes: 3 #changed, original 13
  sem2ins_classes: []
  semantic_only: True
  ignore_label: -100
  grouping_cfg:
    score_thr: 0.2
    radius: 0.04
    mean_active: 300
    class_numpoint_mean: [1823, 7457, 6189]
    npoint_thr: 0.05  # absolute if class_numpoint == -1, relative if class_numpoint != -1
    ignore_classes: [-99]
  instance_voxel_cfg:
    scale: 50
    spatial_shape: 20
  train_cfg:
    max_proposal_num: 200
    pos_iou_thr: 0.5
  test_cfg:
    x4_split: True
    cls_score_thr: 0.001
    mask_score_thr: -0.5
    min_npoint: 100
  fixed_modules: []

data:
  train:
    type: 's3dis'
    data_root: 'dataset/s3dis/preprocess_mint' #changed original 'dataset/s3dis/preprocess'
    prefix: ['Area_1'] # changed original ['Area_1', 'Area_2', 'Area_3', 'Area_4', 'Area_6']
    suffix: '_inst_nostuff.pth'
    repeat: 20
    training: True
    voxel_cfg:
      scale: 10 #changed, original 50
      spatial_shape: [128, 512]
      max_npoint: 250000
      min_npoint: 5000
  test:
    type: 's3dis'
    data_root: 'dataset/s3dis/preprocess_mint' #changed original 'dataset/s3dis/preprocess'
    prefix: 'Area_2' #changed, original 'Area_5'
    suffix: '_inst_nostuff.pth'
    training: False
    voxel_cfg:
      scale: 50
      spatial_shape: [128, 512]
      max_npoint: 250000
      min_npoint: 5000

dataloader:
  train:
    batch_size: 2 #changed;original was 4 #worked on 2
    num_workers: 4 #changed, original was 4
  test:
    batch_size: 1
    num_workers: 1

optimizer:
  type: 'Adam'
  lr: 0.004

save_cfg:
  semantic: True
  offset: True
  instance: False

fp16: False
epochs: 5
step_epoch: 0
save_freq: 2
pretrain: './hais_ckpt_spconv2.pth'
work_dir: ''



2022-04-30 21:46:20,675 - INFO - Distributed: True
2022-04-30 21:46:20,675 - INFO - Mix precision training: False
2022-04-30 21:46:21,909 - INFO - Load train dataset: 640 scans
2022-04-30 21:46:21,910 - INFO - Load test dataset: 29 scans
2022-04-30 21:46:21,910 - INFO - Load pretrain from ./hais_ckpt_spconv2.pth
2022-04-30 21:46:22,128 - INFO - removed keys in source state_dict due to size mismatch: semantic_linear.3.weight, semantic_linear.3.bias
2022-04-30 21:46:22,128 - INFO - missing keys in source state_dict: semantic_linear.3.weight, semantic_linear.3.bias
2022-04-30 21:46:22,128 - INFO - unexpected key in source state_dict: tiny_unet.blocks.block0.conv_branch.0.weight, tiny_unet.blocks.block0.conv_branch.0.bias, tiny_unet.blocks.block0.conv_branch.0.running_mean, tiny_unet.blocks.block0.conv_branch.0.running_var, tiny_unet.blocks.block0.conv_branch.0.num_batches_tracked, tiny_unet.blocks.block0.conv_branch.2.weight, tiny_unet.blocks.block0.conv_branch.3.weight, tiny_unet.blocks.block0.conv_branch.3.bias, tiny_unet.blocks.block0.conv_branch.3.running_mean, tiny_unet.blocks.block0.conv_branch.3.running_var, tiny_unet.blocks.block0.conv_branch.3.num_batches_tracked, tiny_unet.blocks.block0.conv_branch.5.weight, tiny_unet.blocks.block1.conv_branch.0.weight, tiny_unet.blocks.block1.conv_branch.0.bias, tiny_unet.blocks.block1.conv_branch.0.running_mean, tiny_unet.blocks.block1.conv_branch.0.running_var, tiny_unet.blocks.block1.conv_branch.0.num_batches_tracked, tiny_unet.blocks.block1.conv_branch.2.weight, tiny_unet.blocks.block1.conv_branch.3.weight, tiny_unet.blocks.block1.conv_branch.3.bias, tiny_unet.blocks.block1.conv_branch.3.running_mean, tiny_unet.blocks.block1.conv_branch.3.running_var, tiny_unet.blocks.block1.conv_branch.3.num_batches_tracked, tiny_unet.blocks.block1.conv_branch.5.weight, tiny_unet.conv.0.weight, tiny_unet.conv.0.bias, tiny_unet.conv.0.running_mean, tiny_unet.conv.0.running_var, tiny_unet.conv.0.num_batches_tracked, tiny_unet.conv.2.weight, tiny_unet.u.blocks.block0.conv_branch.0.weight, tiny_unet.u.blocks.block0.conv_branch.0.bias, tiny_unet.u.blocks.block0.conv_branch.0.running_mean, tiny_unet.u.blocks.block0.conv_branch.0.running_var, tiny_unet.u.blocks.block0.conv_branch.0.num_batches_tracked, tiny_unet.u.blocks.block0.conv_branch.2.weight, tiny_unet.u.blocks.block0.conv_branch.3.weight, tiny_unet.u.blocks.block0.conv_branch.3.bias, tiny_unet.u.blocks.block0.conv_branch.3.running_mean, tiny_unet.u.blocks.block0.conv_branch.3.running_var, tiny_unet.u.blocks.block0.conv_branch.3.num_batches_tracked, tiny_unet.u.blocks.block0.conv_branch.5.weight, tiny_unet.u.blocks.block1.conv_branch.0.weight, tiny_unet.u.blocks.block1.conv_branch.0.bias, tiny_unet.u.blocks.block1.conv_branch.0.running_mean, tiny_unet.u.blocks.block1.conv_branch.0.running_var, tiny_unet.u.blocks.block1.conv_branch.0.num_batches_tracked, tiny_unet.u.blocks.block1.conv_branch.2.weight, tiny_unet.u.blocks.block1.conv_branch.3.weight, tiny_unet.u.blocks.block1.conv_branch.3.bias, tiny_unet.u.blocks.block1.conv_branch.3.running_mean, tiny_unet.u.blocks.block1.conv_branch.3.running_var, tiny_unet.u.blocks.block1.conv_branch.3.num_batches_tracked, tiny_unet.u.blocks.block1.conv_branch.5.weight, tiny_unet.deconv.0.weight, tiny_unet.deconv.0.bias, tiny_unet.deconv.0.running_mean, tiny_unet.deconv.0.running_var, tiny_unet.deconv.0.num_batches_tracked, tiny_unet.deconv.2.weight, tiny_unet.blocks_tail.block0.i_branch.0.weight, tiny_unet.blocks_tail.block0.conv_branch.0.weight, tiny_unet.blocks_tail.block0.conv_branch.0.bias, tiny_unet.blocks_tail.block0.conv_branch.0.running_mean, tiny_unet.blocks_tail.block0.conv_branch.0.running_var, tiny_unet.blocks_tail.block0.conv_branch.0.num_batches_tracked, tiny_unet.blocks_tail.block0.conv_branch.2.weight, tiny_unet.blocks_tail.block0.conv_branch.3.weight, tiny_unet.blocks_tail.block0.conv_branch.3.bias, tiny_unet.blocks_tail.block0.conv_branch.3.running_mean, tiny_unet.blocks_tail.block0.conv_branch.3.running_var, tiny_unet.blocks_tail.block0.conv_branch.3.num_batches_tracked, 
tiny_unet.blocks_tail.block0.conv_branch.5.weight, tiny_unet.blocks_tail.block1.conv_branch.0.weight, tiny_unet.blocks_tail.block1.conv_branch.0.bias, tiny_unet.blocks_tail.block1.conv_branch.0.running_mean, tiny_unet.blocks_tail.block1.conv_branch.0.running_var, tiny_unet.blocks_tail.block1.conv_branch.0.num_batches_tracked, tiny_unet.blocks_tail.block1.conv_branch.2.weight, tiny_unet.blocks_tail.block1.conv_branch.3.weight, tiny_unet.blocks_tail.block1.conv_branch.3.bias, tiny_unet.blocks_tail.block1.conv_branch.3.running_mean, tiny_unet.blocks_tail.block1.conv_branch.3.running_var, tiny_unet.blocks_tail.block1.conv_branch.3.num_batches_tracked, tiny_unet.blocks_tail.block1.conv_branch.5.weight, tiny_unet_outputlayer.0.weight, tiny_unet_outputlayer.0.bias, tiny_unet_outputlayer.0.running_mean, tiny_unet_outputlayer.0.running_var, tiny_unet_outputlayer.0.num_batches_tracked, iou_score_linear.weight, iou_score_linear.bias, mask_linear.0.weight, mask_linear.0.bias, mask_linear.2.weight, mask_linear.2.bias, unet.u.u.conv.0.weight, unet.u.u.conv.0.bias, unet.u.u.conv.0.running_mean, unet.u.u.conv.0.running_var, unet.u.u.conv.0.num_batches_tracked, unet.u.u.conv.2.weight, unet.u.u.u.blocks.block0.conv_branch.0.weight, unet.u.u.u.blocks.block0.conv_branch.0.bias, unet.u.u.u.blocks.block0.conv_branch.0.running_mean, unet.u.u.u.blocks.block0.conv_branch.0.running_var, unet.u.u.u.blocks.block0.conv_branch.0.num_batches_tracked, unet.u.u.u.blocks.block0.conv_branch.2.weight, unet.u.u.u.blocks.block0.conv_branch.3.weight, unet.u.u.u.blocks.block0.conv_branch.3.bias, unet.u.u.u.blocks.block0.conv_branch.3.running_mean, unet.u.u.u.blocks.block0.conv_branch.3.running_var, unet.u.u.u.blocks.block0.conv_branch.3.num_batches_tracked, unet.u.u.u.blocks.block0.conv_branch.5.weight, unet.u.u.u.blocks.block1.conv_branch.0.weight, unet.u.u.u.blocks.block1.conv_branch.0.bias, unet.u.u.u.blocks.block1.conv_branch.0.running_mean, unet.u.u.u.blocks.block1.conv_branch.0.running_var, unet.u.u.u.blocks.block1.conv_branch.0.num_batches_tracked, unet.u.u.u.blocks.block1.conv_branch.2.weight, unet.u.u.u.blocks.block1.conv_branch.3.weight, unet.u.u.u.blocks.block1.conv_branch.3.bias, unet.u.u.u.blocks.block1.conv_branch.3.running_mean, unet.u.u.u.blocks.block1.conv_branch.3.running_var, unet.u.u.u.blocks.block1.conv_branch.3.num_batches_tracked, unet.u.u.u.blocks.block1.conv_branch.5.weight, unet.u.u.u.conv.0.weight, unet.u.u.u.conv.0.bias, unet.u.u.u.conv.0.running_mean, unet.u.u.u.conv.0.running_var, unet.u.u.u.conv.0.num_batches_tracked, unet.u.u.u.conv.2.weight, unet.u.u.u.u.blocks.block0.conv_branch.0.weight, unet.u.u.u.u.blocks.block0.conv_branch.0.bias, unet.u.u.u.u.blocks.block0.conv_branch.0.running_mean, unet.u.u.u.u.blocks.block0.conv_branch.0.running_var, unet.u.u.u.u.blocks.block0.conv_branch.0.num_batches_tracked, unet.u.u.u.u.blocks.block0.conv_branch.2.weight, unet.u.u.u.u.blocks.block0.conv_branch.3.weight, unet.u.u.u.u.blocks.block0.conv_branch.3.bias, unet.u.u.u.u.blocks.block0.conv_branch.3.running_mean, unet.u.u.u.u.blocks.block0.conv_branch.3.running_var, unet.u.u.u.u.blocks.block0.conv_branch.3.num_batches_tracked, unet.u.u.u.u.blocks.block0.conv_branch.5.weight, unet.u.u.u.u.blocks.block1.conv_branch.0.weight, unet.u.u.u.u.blocks.block1.conv_branch.0.bias, unet.u.u.u.u.blocks.block1.conv_branch.0.running_mean, unet.u.u.u.u.blocks.block1.conv_branch.0.running_var, unet.u.u.u.u.blocks.block1.conv_branch.0.num_batches_tracked, unet.u.u.u.u.blocks.block1.conv_branch.2.weight, 
unet.u.u.u.u.blocks.block1.conv_branch.3.weight, unet.u.u.u.u.blocks.block1.conv_branch.3.bias, unet.u.u.u.u.blocks.block1.conv_branch.3.running_mean, unet.u.u.u.u.blocks.block1.conv_branch.3.running_var, unet.u.u.u.u.blocks.block1.conv_branch.3.num_batches_tracked, unet.u.u.u.u.blocks.block1.conv_branch.5.weight, unet.u.u.u.u.conv.0.weight, unet.u.u.u.u.conv.0.bias, unet.u.u.u.u.conv.0.running_mean, unet.u.u.u.u.conv.0.running_var, unet.u.u.u.u.conv.0.num_batches_tracked, unet.u.u.u.u.conv.2.weight, unet.u.u.u.u.u.blocks.block0.conv_branch.0.weight, unet.u.u.u.u.u.blocks.block0.conv_branch.0.bias, unet.u.u.u.u.u.blocks.block0.conv_branch.0.running_mean, unet.u.u.u.u.u.blocks.block0.conv_branch.0.running_var, unet.u.u.u.u.u.blocks.block0.conv_branch.0.num_batches_tracked, unet.u.u.u.u.u.blocks.block0.conv_branch.2.weight, unet.u.u.u.u.u.blocks.block0.conv_branch.3.weight, unet.u.u.u.u.u.blocks.block0.conv_branch.3.bias, unet.u.u.u.u.u.blocks.block0.conv_branch.3.running_mean, unet.u.u.u.u.u.blocks.block0.conv_branch.3.running_var, unet.u.u.u.u.u.blocks.block0.conv_branch.3.num_batches_tracked, unet.u.u.u.u.u.blocks.block0.conv_branch.5.weight, unet.u.u.u.u.u.blocks.block1.conv_branch.0.weight, unet.u.u.u.u.u.blocks.block1.conv_branch.0.bias, unet.u.u.u.u.u.blocks.block1.conv_branch.0.running_mean, unet.u.u.u.u.u.blocks.block1.conv_branch.0.running_var, unet.u.u.u.u.u.blocks.block1.conv_branch.0.num_batches_tracked, unet.u.u.u.u.u.blocks.block1.conv_branch.2.weight, unet.u.u.u.u.u.blocks.block1.conv_branch.3.weight, unet.u.u.u.u.u.blocks.block1.conv_branch.3.bias, unet.u.u.u.u.u.blocks.block1.conv_branch.3.running_mean, unet.u.u.u.u.u.blocks.block1.conv_branch.3.running_var, unet.u.u.u.u.u.blocks.block1.conv_branch.3.num_batches_tracked, unet.u.u.u.u.u.blocks.block1.conv_branch.5.weight, unet.u.u.u.u.u.conv.0.weight, unet.u.u.u.u.u.conv.0.bias, unet.u.u.u.u.u.conv.0.running_mean, unet.u.u.u.u.u.conv.0.running_var, unet.u.u.u.u.u.conv.0.num_batches_tracked, unet.u.u.u.u.u.conv.2.weight, unet.u.u.u.u.u.u.blocks.block0.conv_branch.0.weight, unet.u.u.u.u.u.u.blocks.block0.conv_branch.0.bias, unet.u.u.u.u.u.u.blocks.block0.conv_branch.0.running_mean, unet.u.u.u.u.u.u.blocks.block0.conv_branch.0.running_var, unet.u.u.u.u.u.u.blocks.block0.conv_branch.0.num_batches_tracked, unet.u.u.u.u.u.u.blocks.block0.conv_branch.2.weight, unet.u.u.u.u.u.u.blocks.block0.conv_branch.3.weight, unet.u.u.u.u.u.u.blocks.block0.conv_branch.3.bias, unet.u.u.u.u.u.u.blocks.block0.conv_branch.3.running_mean, unet.u.u.u.u.u.u.blocks.block0.conv_branch.3.running_var, unet.u.u.u.u.u.u.blocks.block0.conv_branch.3.num_batches_tracked, unet.u.u.u.u.u.u.blocks.block0.conv_branch.5.weight, unet.u.u.u.u.u.u.blocks.block1.conv_branch.0.weight, unet.u.u.u.u.u.u.blocks.block1.conv_branch.0.bias, unet.u.u.u.u.u.u.blocks.block1.conv_branch.0.running_mean, unet.u.u.u.u.u.u.blocks.block1.conv_branch.0.running_var, unet.u.u.u.u.u.u.blocks.block1.conv_branch.0.num_batches_tracked, unet.u.u.u.u.u.u.blocks.block1.conv_branch.2.weight, unet.u.u.u.u.u.u.blocks.block1.conv_branch.3.weight, unet.u.u.u.u.u.u.blocks.block1.conv_branch.3.bias, unet.u.u.u.u.u.u.blocks.block1.conv_branch.3.running_mean, unet.u.u.u.u.u.u.blocks.block1.conv_branch.3.running_var, unet.u.u.u.u.u.u.blocks.block1.conv_branch.3.num_batches_tracked, unet.u.u.u.u.u.u.blocks.block1.conv_branch.5.weight, unet.u.u.u.u.u.deconv.0.weight, unet.u.u.u.u.u.deconv.0.bias, unet.u.u.u.u.u.deconv.0.running_mean, unet.u.u.u.u.u.deconv.0.running_var, 
unet.u.u.u.u.u.deconv.0.num_batches_tracked, unet.u.u.u.u.u.deconv.2.weight, unet.u.u.u.u.u.blocks_tail.block0.i_branch.0.weight, unet.u.u.u.u.u.blocks_tail.block0.conv_branch.0.weight, unet.u.u.u.u.u.blocks_tail.block0.conv_branch.0.bias, unet.u.u.u.u.u.blocks_tail.block0.conv_branch.0.running_mean, unet.u.u.u.u.u.blocks_tail.block0.conv_branch.0.running_var, unet.u.u.u.u.u.blocks_tail.block0.conv_branch.0.num_batches_tracked, unet.u.u.u.u.u.blocks_tail.block0.conv_branch.2.weight, unet.u.u.u.u.u.blocks_tail.block0.conv_branch.3.weight, unet.u.u.u.u.u.blocks_tail.block0.conv_branch.3.bias, unet.u.u.u.u.u.blocks_tail.block0.conv_branch.3.running_mean, unet.u.u.u.u.u.blocks_tail.block0.conv_branch.3.running_var, unet.u.u.u.u.u.blocks_tail.block0.conv_branch.3.num_batches_tracked, unet.u.u.u.u.u.blocks_tail.block0.conv_branch.5.weight, unet.u.u.u.u.u.blocks_tail.block1.conv_branch.0.weight, unet.u.u.u.u.u.blocks_tail.block1.conv_branch.0.bias, unet.u.u.u.u.u.blocks_tail.block1.conv_branch.0.running_mean, unet.u.u.u.u.u.blocks_tail.block1.conv_branch.0.running_var, unet.u.u.u.u.u.blocks_tail.block1.conv_branch.0.num_batches_tracked, unet.u.u.u.u.u.blocks_tail.block1.conv_branch.2.weight, unet.u.u.u.u.u.blocks_tail.block1.conv_branch.3.weight, unet.u.u.u.u.u.blocks_tail.block1.conv_branch.3.bias, unet.u.u.u.u.u.blocks_tail.block1.conv_branch.3.running_mean, unet.u.u.u.u.u.blocks_tail.block1.conv_branch.3.running_var, unet.u.u.u.u.u.blocks_tail.block1.conv_branch.3.num_batches_tracked, unet.u.u.u.u.u.blocks_tail.block1.conv_branch.5.weight, unet.u.u.u.u.deconv.0.weight, unet.u.u.u.u.deconv.0.bias, unet.u.u.u.u.deconv.0.running_mean, unet.u.u.u.u.deconv.0.running_var, unet.u.u.u.u.deconv.0.num_batches_tracked, unet.u.u.u.u.deconv.2.weight, unet.u.u.u.u.blocks_tail.block0.i_branch.0.weight, unet.u.u.u.u.blocks_tail.block0.conv_branch.0.weight, unet.u.u.u.u.blocks_tail.block0.conv_branch.0.bias, unet.u.u.u.u.blocks_tail.block0.conv_branch.0.running_mean, unet.u.u.u.u.blocks_tail.block0.conv_branch.0.running_var, unet.u.u.u.u.blocks_tail.block0.conv_branch.0.num_batches_tracked, unet.u.u.u.u.blocks_tail.block0.conv_branch.2.weight, unet.u.u.u.u.blocks_tail.block0.conv_branch.3.weight, unet.u.u.u.u.blocks_tail.block0.conv_branch.3.bias, unet.u.u.u.u.blocks_tail.block0.conv_branch.3.running_mean, unet.u.u.u.u.blocks_tail.block0.conv_branch.3.running_var, unet.u.u.u.u.blocks_tail.block0.conv_branch.3.num_batches_tracked, unet.u.u.u.u.blocks_tail.block0.conv_branch.5.weight, unet.u.u.u.u.blocks_tail.block1.conv_branch.0.weight, unet.u.u.u.u.blocks_tail.block1.conv_branch.0.bias, unet.u.u.u.u.blocks_tail.block1.conv_branch.0.running_mean, unet.u.u.u.u.blocks_tail.block1.conv_branch.0.running_var, unet.u.u.u.u.blocks_tail.block1.conv_branch.0.num_batches_tracked, unet.u.u.u.u.blocks_tail.block1.conv_branch.2.weight, unet.u.u.u.u.blocks_tail.block1.conv_branch.3.weight, unet.u.u.u.u.blocks_tail.block1.conv_branch.3.bias, unet.u.u.u.u.blocks_tail.block1.conv_branch.3.running_mean, unet.u.u.u.u.blocks_tail.block1.conv_branch.3.running_var, unet.u.u.u.u.blocks_tail.block1.conv_branch.3.num_batches_tracked, unet.u.u.u.u.blocks_tail.block1.conv_branch.5.weight, unet.u.u.u.deconv.0.weight, unet.u.u.u.deconv.0.bias, unet.u.u.u.deconv.0.running_mean, unet.u.u.u.deconv.0.running_var, unet.u.u.u.deconv.0.num_batches_tracked, unet.u.u.u.deconv.2.weight, unet.u.u.u.blocks_tail.block0.i_branch.0.weight, unet.u.u.u.blocks_tail.block0.conv_branch.0.weight, unet.u.u.u.blocks_tail.block0.conv_branch.0.bias, 
unet.u.u.u.blocks_tail.block0.conv_branch.0.running_mean, unet.u.u.u.blocks_tail.block0.conv_branch.0.running_var, unet.u.u.u.blocks_tail.block0.conv_branch.0.num_batches_tracked, unet.u.u.u.blocks_tail.block0.conv_branch.2.weight, unet.u.u.u.blocks_tail.block0.conv_branch.3.weight, unet.u.u.u.blocks_tail.block0.conv_branch.3.bias, unet.u.u.u.blocks_tail.block0.conv_branch.3.running_mean, unet.u.u.u.blocks_tail.block0.conv_branch.3.running_var, unet.u.u.u.blocks_tail.block0.conv_branch.3.num_batches_tracked, unet.u.u.u.blocks_tail.block0.conv_branch.5.weight, unet.u.u.u.blocks_tail.block1.conv_branch.0.weight, unet.u.u.u.blocks_tail.block1.conv_branch.0.bias, unet.u.u.u.blocks_tail.block1.conv_branch.0.running_mean, unet.u.u.u.blocks_tail.block1.conv_branch.0.running_var, unet.u.u.u.blocks_tail.block1.conv_branch.0.num_batches_tracked, unet.u.u.u.blocks_tail.block1.conv_branch.2.weight, unet.u.u.u.blocks_tail.block1.conv_branch.3.weight, unet.u.u.u.blocks_tail.block1.conv_branch.3.bias, unet.u.u.u.blocks_tail.block1.conv_branch.3.running_mean, unet.u.u.u.blocks_tail.block1.conv_branch.3.running_var, unet.u.u.u.blocks_tail.block1.conv_branch.3.num_batches_tracked, unet.u.u.u.blocks_tail.block1.conv_branch.5.weight, unet.u.u.deconv.0.weight, unet.u.u.deconv.0.bias, unet.u.u.deconv.0.running_mean, unet.u.u.deconv.0.running_var, unet.u.u.deconv.0.num_batches_tracked, unet.u.u.deconv.2.weight, unet.u.u.blocks_tail.block0.i_branch.0.weight, unet.u.u.blocks_tail.block0.conv_branch.0.weight, unet.u.u.blocks_tail.block0.conv_branch.0.bias, unet.u.u.blocks_tail.block0.conv_branch.0.running_mean, unet.u.u.blocks_tail.block0.conv_branch.0.running_var, unet.u.u.blocks_tail.block0.conv_branch.0.num_batches_tracked, unet.u.u.blocks_tail.block0.conv_branch.2.weight, unet.u.u.blocks_tail.block0.conv_branch.3.weight, unet.u.u.blocks_tail.block0.conv_branch.3.bias, unet.u.u.blocks_tail.block0.conv_branch.3.running_mean, unet.u.u.blocks_tail.block0.conv_branch.3.running_var, unet.u.u.blocks_tail.block0.conv_branch.3.num_batches_tracked, unet.u.u.blocks_tail.block0.conv_branch.5.weight, unet.u.u.blocks_tail.block1.conv_branch.0.weight, unet.u.u.blocks_tail.block1.conv_branch.0.bias, unet.u.u.blocks_tail.block1.conv_branch.0.running_mean, unet.u.u.blocks_tail.block1.conv_branch.0.running_var, unet.u.u.blocks_tail.block1.conv_branch.0.num_batches_tracked, unet.u.u.blocks_tail.block1.conv_branch.2.weight, unet.u.u.blocks_tail.block1.conv_branch.3.weight, unet.u.u.blocks_tail.block1.conv_branch.3.bias, unet.u.u.blocks_tail.block1.conv_branch.3.running_mean, unet.u.u.blocks_tail.block1.conv_branch.3.running_var, unet.u.u.blocks_tail.block1.conv_branch.3.num_batches_tracked, unet.u.u.blocks_tail.block1.conv_branch.5.weight
2022-04-30 21:46:22,129 - INFO - Training
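
For context on the log above: the "missing keys" / "unexpected keys" messages are expected when fine-tuning from the HAIS checkpoint with `semantic_only: True` and a changed class count. The checkpoint's instance branch (`tiny_unet.*`, `iou_score_linear.*`, `mask_linear.*`) has no counterpart in the semantic-only model, and the semantic head `semantic_linear.3.*` is re-initialized because its shape changed when `semantic_classes` went from 13 to 3. Below is a minimal sketch of this kind of shape-aware partial loading, assuming a standard PyTorch state dict; the `'net'` key for the checkpoint layout and the function name are assumptions, not the repo's actual loader.

```python
import torch
import torch.nn as nn

def load_partial_checkpoint(model: nn.Module, path: str) -> None:
    """Copy only parameters whose name and shape match the model."""
    ckpt = torch.load(path, map_location='cpu')
    src = ckpt.get('net', ckpt)  # assumed checkpoint layout
    dst = model.state_dict()
    # Skip unexpected keys (e.g. tiny_unet.*) and size-mismatched heads
    # (e.g. semantic_linear.3.* after changing the number of classes).
    kept = {k: v for k, v in src.items()
            if k in dst and v.shape == dst[k].shape}
    model.load_state_dict(kept, strict=False)
```

Parameters that are skipped keep their random initialization, which is why the re-shaped semantic head has to be trained from scratch on the new dataset while the backbone starts from the pretrained weights.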
