
AttributeError: Can't pickle local object 'TrainAugmentation.__init__.<locals>.<lambda>' #9

Open · CWF-999 opened this issue Oct 25, 2019 · 2 comments

CWF-999 commented Oct 25, 2019

Can somebody help me?

(py3.6) D:\winfred\mobilenetv3_ssd>python train_ssd.py --dataset_type voc --datasets ./data/VOC2007 ./data/VOC2012 --validation_dataset ./data/test/VOC2007 --net mb3-ssd-lite --pretrained_ssd ./models/mb3-ssd-lite-Epoch-149-Loss-5.782852862012213.pth
2019-10-25 17:22:45,411 - root - INFO - Namespace(balance_data=False, base_net=None, base_net_lr=None, batch_size=32, checkpoint_folder='models/', dataset_type='voc', datasets=['./data/VOC2007', './data/VOC2012'], debug_steps=100, extra_layers_lr=None, freeze_base_net=False, freeze_net=False, gamma=0.1, lr=0.001, mb2_width_mult=1.0, milestones='80,100', momentum=0.9, net='mb3-ssd-lite', num_epochs=120, num_workers=4, pretrained_ssd='./models/mb3-ssd-lite-Epoch-149-Loss-5.782852862012213.pth', resume=None, scheduler='multi-step', t_max=120, use_cuda=True, validation_dataset='./data/test/VOC2007', validation_epochs=5, weight_decay=0.0005)
2019-10-25 17:22:45,413 - root - INFO - Prepare training datasets.
2019-10-25 17:22:45,415 - root - INFO - No labels file, using default VOC classes.
2019-10-25 17:22:45,418 - root - INFO - No labels file, using default VOC classes.
2019-10-25 17:22:45,419 - root - INFO - Stored labels into file models/voc-model-labels.txt.
2019-10-25 17:22:45,420 - root - INFO - Train dataset size: 16551
2019-10-25 17:22:45,420 - root - INFO - Prepare Validation datasets.
2019-10-25 17:22:45,422 - root - INFO - No labels file, using default VOC classes.
2019-10-25 17:22:45,424 - root - INFO - validation dataset size: 4952
2019-10-25 17:22:45,428 - root - INFO - Build network.
SSD(
(base_net): Sequential(
(0): Conv2d(3, 16, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): h_swish()
(3): MobileBlock(
(conv): Sequential(
(0): Conv2d(16, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
)
(depth_conv): Sequential(
(0): Conv2d(16, 16, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=16)
(1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(squeeze_block): SqueezeBlock(
(avg_pool): AdaptiveAvgPool2d(output_size=1)
(dense): Sequential(
(0): Linear(in_features=16, out_features=4, bias=True)
(1): ReLU(inplace)
(2): Linear(in_features=4, out_features=16, bias=True)
(3): h_sigmoid()
)
)
(point_conv): Sequential(
(0): Conv2d(16, 16, kernel_size=(1, 1), stride=(1, 1))
(1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
)
)
(4): MobileBlock(
(conv): Sequential(
(0): Conv2d(16, 72, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(72, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
)
(depth_conv): Sequential(
(0): Conv2d(72, 72, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=72)
(1): BatchNorm2d(72, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(point_conv): Sequential(
(0): Conv2d(72, 24, kernel_size=(1, 1), stride=(1, 1))
(1): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
)
)
(5): MobileBlock(
(conv): Sequential(
(0): Conv2d(24, 88, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(88, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
)
(depth_conv): Sequential(
(0): Conv2d(88, 88, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=88)
(1): BatchNorm2d(88, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(point_conv): Sequential(
(0): Conv2d(88, 24, kernel_size=(1, 1), stride=(1, 1))
(1): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
)
)
(6): MobileBlock(
(conv): Sequential(
(0): Conv2d(24, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
)
(depth_conv): Sequential(
(0): Conv2d(96, 96, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2), groups=96)
(1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(squeeze_block): SqueezeBlock(
(avg_pool): AdaptiveAvgPool2d(output_size=1)
(dense): Sequential(
(0): Linear(in_features=96, out_features=24, bias=True)
(1): ReLU(inplace)
(2): Linear(in_features=24, out_features=96, bias=True)
(3): h_sigmoid()
)
)
(point_conv): Sequential(
(0): Conv2d(96, 40, kernel_size=(1, 1), stride=(1, 1))
(1): BatchNorm2d(40, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
)
)
(7): MobileBlock(
(conv): Sequential(
(0): Conv2d(40, 240, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(240, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
)
(depth_conv): Sequential(
(0): Conv2d(240, 240, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=240)
(1): BatchNorm2d(240, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(squeeze_block): SqueezeBlock(
(avg_pool): AdaptiveAvgPool2d(output_size=1)
(dense): Sequential(
(0): Linear(in_features=240, out_features=60, bias=True)
(1): ReLU(inplace)
(2): Linear(in_features=60, out_features=240, bias=True)
(3): h_sigmoid()
)
)
(point_conv): Sequential(
(0): Conv2d(240, 40, kernel_size=(1, 1), stride=(1, 1))
(1): BatchNorm2d(40, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
)
)
(8): MobileBlock(
(conv): Sequential(
(0): Conv2d(40, 240, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(240, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
)
(depth_conv): Sequential(
(0): Conv2d(240, 240, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=240)
(1): BatchNorm2d(240, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(squeeze_block): SqueezeBlock(
(avg_pool): AdaptiveAvgPool2d(output_size=1)
(dense): Sequential(
(0): Linear(in_features=240, out_features=60, bias=True)
(1): ReLU(inplace)
(2): Linear(in_features=60, out_features=240, bias=True)
(3): h_sigmoid()
)
)
(point_conv): Sequential(
(0): Conv2d(240, 40, kernel_size=(1, 1), stride=(1, 1))
(1): BatchNorm2d(40, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
)
)
(9): MobileBlock(
(conv): Sequential(
(0): Conv2d(40, 120, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(120, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): h_swish()
)
(depth_conv): Sequential(
(0): Conv2d(120, 120, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=120)
(1): BatchNorm2d(120, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(squeeze_block): SqueezeBlock(
(avg_pool): AdaptiveAvgPool2d(output_size=1)
(dense): Sequential(
(0): Linear(in_features=120, out_features=30, bias=True)
(1): ReLU(inplace)
(2): Linear(in_features=30, out_features=120, bias=True)
(3): h_sigmoid()
)
)
(point_conv): Sequential(
(0): Conv2d(120, 48, kernel_size=(1, 1), stride=(1, 1))
(1): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): h_swish()
)
)
(10): MobileBlock(
(conv): Sequential(
(0): Conv2d(48, 144, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): h_swish()
)
(depth_conv): Sequential(
(0): Conv2d(144, 144, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=144)
(1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(squeeze_block): SqueezeBlock(
(avg_pool): AdaptiveAvgPool2d(output_size=1)
(dense): Sequential(
(0): Linear(in_features=144, out_features=36, bias=True)
(1): ReLU(inplace)
(2): Linear(in_features=36, out_features=144, bias=True)
(3): h_sigmoid()
)
)
(point_conv): Sequential(
(0): Conv2d(144, 48, kernel_size=(1, 1), stride=(1, 1))
(1): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): h_swish()
)
)
(11): MobileBlock(
(conv): Sequential(
(0): Conv2d(48, 288, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): h_swish()
)
(depth_conv): Sequential(
(0): Conv2d(288, 288, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2), groups=288)
(1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(squeeze_block): SqueezeBlock(
(avg_pool): AdaptiveAvgPool2d(output_size=1)
(dense): Sequential(
(0): Linear(in_features=288, out_features=72, bias=True)
(1): ReLU(inplace)
(2): Linear(in_features=72, out_features=288, bias=True)
(3): h_sigmoid()
)
)
(point_conv): Sequential(
(0): Conv2d(288, 96, kernel_size=(1, 1), stride=(1, 1))
(1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): h_swish()
)
)
(12): MobileBlock(
(conv): Sequential(
(0): Conv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): h_swish()
)
(depth_conv): Sequential(
(0): Conv2d(576, 576, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=576)
(1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(squeeze_block): SqueezeBlock(
(avg_pool): AdaptiveAvgPool2d(output_size=1)
(dense): Sequential(
(0): Linear(in_features=576, out_features=144, bias=True)
(1): ReLU(inplace)
(2): Linear(in_features=144, out_features=576, bias=True)
(3): h_sigmoid()
)
)
(point_conv): Sequential(
(0): Conv2d(576, 96, kernel_size=(1, 1), stride=(1, 1))
(1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): h_swish()
)
)
(13): MobileBlock(
(conv): Sequential(
(0): Conv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): h_swish()
)
(depth_conv): Sequential(
(0): Conv2d(576, 576, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=576)
(1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(squeeze_block): SqueezeBlock(
(avg_pool): AdaptiveAvgPool2d(output_size=1)
(dense): Sequential(
(0): Linear(in_features=576, out_features=144, bias=True)
(1): ReLU(inplace)
(2): Linear(in_features=144, out_features=576, bias=True)
(3): h_sigmoid()
)
)
(point_conv): Sequential(
(0): Conv2d(576, 96, kernel_size=(1, 1), stride=(1, 1))
(1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): h_swish()
)
)
(14): Conv2d(96, 576, kernel_size=(1, 1), stride=(1, 1))
(15): SqueezeBlock(
(avg_pool): AdaptiveAvgPool2d(output_size=1)
(dense): Sequential(
(0): Linear(in_features=576, out_features=144, bias=True)
(1): ReLU(inplace)
(2): Linear(in_features=144, out_features=576, bias=True)
(3): h_sigmoid()
)
)
(16): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(17): h_swish()
(18): Conv2d(576, 1280, kernel_size=(1, 1), stride=(1, 1))
(19): h_swish()
)
(extras): ModuleList(
(0): InvertedResidual(
(conv): Sequential(
(0): Conv2d(1280, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace)
(3): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=256, bias=False)
(4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU6(inplace)
(6): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(7): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): InvertedResidual(
(conv): Sequential(
(0): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace)
(3): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=128, bias=False)
(4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU6(inplace)
(6): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(7): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(2): InvertedResidual(
(conv): Sequential(
(0): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace)
(3): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=128, bias=False)
(4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU6(inplace)
(6): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(7): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(3): InvertedResidual(
(conv): Sequential(
(0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace)
(3): Conv2d(64, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=64, bias=False)
(4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU6(inplace)
(6): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(7): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
(classification_headers): ModuleList(
(0): Sequential(
(0): Conv2d(288, 288, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=288)
(1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6()
(3): Conv2d(288, 126, kernel_size=(1, 1), stride=(1, 1))
)
(1): Sequential(
(0): Conv2d(1280, 1280, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=1280)
(1): BatchNorm2d(1280, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6()
(3): Conv2d(1280, 126, kernel_size=(1, 1), stride=(1, 1))
)
(2): Sequential(
(0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=512)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6()
(3): Conv2d(512, 126, kernel_size=(1, 1), stride=(1, 1))
)
(3): Sequential(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6()
(3): Conv2d(256, 126, kernel_size=(1, 1), stride=(1, 1))
)
(4): Sequential(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6()
(3): Conv2d(256, 126, kernel_size=(1, 1), stride=(1, 1))
)
(5): Conv2d(64, 126, kernel_size=(1, 1), stride=(1, 1))
)
(regression_headers): ModuleList(
(0): Sequential(
(0): Conv2d(288, 288, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=288)
(1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6()
(3): Conv2d(288, 24, kernel_size=(1, 1), stride=(1, 1))
)
(1): Sequential(
(0): Conv2d(1280, 1280, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=1280)
(1): BatchNorm2d(1280, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6()
(3): Conv2d(1280, 24, kernel_size=(1, 1), stride=(1, 1))
)
(2): Sequential(
(0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=512)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6()
(3): Conv2d(512, 24, kernel_size=(1, 1), stride=(1, 1))
)
(3): Sequential(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6()
(3): Conv2d(256, 24, kernel_size=(1, 1), stride=(1, 1))
)
(4): Sequential(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6()
(3): Conv2d(256, 24, kernel_size=(1, 1), stride=(1, 1))
)
(5): Conv2d(64, 24, kernel_size=(1, 1), stride=(1, 1))
)
(source_layer_add_ons): ModuleList()
)
2019-10-25 17:22:48,105 - root - INFO - Init from pretrained ssd ./models/mb3-ssd-lite-Epoch-149-Loss-5.782852862012213.pth
2019-10-25 17:22:49,881 - root - INFO - Took 1.78 seconds to load the model.
2019-10-25 17:22:50,180 - root - INFO - Learning rate: 0.001, Base net learning rate: 0.001, Extra Layers learning rate: 0.001.
2019-10-25 17:22:50,180 - root - INFO - Uses MultiStepLR scheduler.
2019-10-25 17:22:50,181 - root - INFO - Start training from epoch 0.
Traceback (most recent call last):
File "train_ssd.py", line 338, in <module>
device=DEVICE, debug_steps=args.debug_steps, epoch=epoch)
File "train_ssd.py", line 119, in train
for i, data in enumerate(loader):
File "D:\Anaconda3\envs\py3.6\lib\site-packages\torch\utils\data\dataloader.py", line 819, in __iter__
return _DataLoaderIter(self)
File "D:\Anaconda3\envs\py3.6\lib\site-packages\torch\utils\data\dataloader.py", line 560, in __init__
w.start()
File "D:\Anaconda3\envs\py3.6\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "D:\Anaconda3\envs\py3.6\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "D:\Anaconda3\envs\py3.6\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "D:\Anaconda3\envs\py3.6\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
reduction.dump(process_obj, to_child)
File "D:\Anaconda3\envs\py3.6\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'TrainAugmentation.__init__.<locals>.<lambda>'

(py3.6) D:\winfred\mobilenetv3_ssd>Traceback (most recent call last):
File "<string>", line 1, in <module>
File "D:\Anaconda3\envs\py3.6\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "D:\Anaconda3\envs\py3.6\lib\multiprocessing\spawn.py", line 115, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
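The root cause can be reproduced in isolation: an object that stores a lambda created inside a method cannot be pickled, and pickling is exactly what spawn-based DataLoader workers on Windows require. A minimal sketch (the class and the std value here are illustrative, not the repo's actual code):

```python
import pickle

class TrainAugmentation:
    def __init__(self, std):
        # A lambda defined inside __init__ is a "local object":
        # pickle serializes functions by qualified name, and
        # 'TrainAugmentation.__init__.<locals>.<lambda>' cannot be looked up.
        self.transform = lambda image: image / std

try:
    pickle.dumps(TrainAugmentation(128.0))
except AttributeError as e:
    # AttributeError: Can't pickle local object
    # 'TrainAugmentation.__init__.<locals>.<lambda>'
    print(e)
```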

@bolun365

I met the same problem. Have you solved it?

@bolun365

bolun365 commented Jan 21, 2021

Solved. On Windows, multiprocessing spawns worker processes instead of forking them as on Linux, so it has to pickle objects to send them from the parent process to the children. The lambda expression defined inside TrainAugmentation.__init__ isn't picklable. Replace it with a callable class, for example:

class LambdaExpressions(object):
    """Picklable replacement for the lambda that normalizes the image."""
    def __init__(self, std):
        self.std = std

    def __call__(self, image, boxes=None, labels=None):
        # Same behavior as the original lambda: scale the image by 1/std.
        return image / self.std, boxes, labels
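For illustration (the std and image values below are made up), a class defined at module level like this survives a pickle round-trip, which is all the spawn-based workers need:

```python
import pickle

class LambdaExpressions(object):
    """Picklable replacement for the lambda in TrainAugmentation."""
    def __init__(self, std):
        self.std = std

    def __call__(self, image, boxes=None, labels=None):
        return image / self.std, boxes, labels

# Round-trip through pickle, as the DataLoader workers do on Windows.
restored = pickle.loads(pickle.dumps(LambdaExpressions(128.0)))
print(restored(256.0))  # (2.0, None, None)
```

Alternatively, passing --num_workers 0 (the flag shown in the Namespace log above) disables worker processes entirely and sidesteps pickling, at the cost of slower data loading.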
