While testing the RGBDiff model with the command:

```
python test_models.py ucf101 RGBDiff /media/sda/nandan/data/ucf101_rgb_val_split_1.txt ucf101_bninception__rgbdiff_checkpoint.pth.tar --arch BNInception --save_scores SCORE_UCF101_1_RGBDIFF --workers=2
```

I get this error:
```
Traceback (most recent call last):
  File "test_models.py", line 130, in <module>
    rst = eval_video((i, data, label))
  File "test_models.py", line 117, in eval_video
    rst = net(input_var).data.cpu().numpy().copy()
  File "/home/nandan/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/nandan/anaconda2/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 73, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/home/nandan/anaconda2/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 83, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/home/nandan/anaconda2/lib/python2.7/site-packages/torch/nn/parallel/parallel_apply.py", line 67, in parallel_apply
    raise output
RuntimeError: cuda runtime error (2) : out of memory at /opt/conda/conda-bld/pytorch_1518238409320/work/torch/lib/THC/generic/THCStorage.cu:58
```
I'm using two K40 GPUs, each with 4742 MiB of global memory.
@yjxiong: I found that the runtime out-of-memory error can be fixed by reducing either `--test_crops` or `--test_segments`. My question is: which one should I prefer? That is, which one affects test accuracy less?
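For intuition on why both flags reduce memory, here is a rough back-of-the-envelope sketch. Both options multiply the batch dimension of the per-video input tensor, so halving either roughly halves the memory the input (and the proportional activations) needs. The numbers below are assumptions for illustration, not taken from the repository: TSN-style defaults of 25 segments and 10 crops, 224×224 inputs, and 15 input channels for RGBDiff (5 stacked frame differences × 3 channels).

```python
def input_tensor_bytes(test_segments=25, test_crops=10,
                       channels=15, height=224, width=224,
                       bytes_per_float=4):
    """Approximate size of the raw network input for ONE video.

    --test_segments and --test_crops both multiply the batch
    dimension, so reducing either shrinks GPU memory use roughly
    in proportion (intermediate activations scale similarly).
    All default values here are illustrative assumptions.
    """
    return (test_segments * test_crops * channels
            * height * width * bytes_per_float)

# With the assumed defaults: ~0.75 GB just for one video's input.
print(input_tensor_bytes() / 1e9)        # -> 0.75264

# Dropping crops from 10 to 1 cuts that by 10x.
print(input_tensor_bytes(test_crops=1) / 1e9)  # -> 0.075264
```

Under this model the two flags are interchangeable for memory, so the accuracy question comes down to which form of test-time averaging (spatial crops vs. temporal segments) the model benefits from more.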