I preprocessed the data in MATLAB and generated the model file. Now, in Python, I am getting this error:
```
############################################################
Video Super Resolution - Pytorch implementation
by Thang Vu (thangvubk@gmail.com)
############################################################
-------YOUR SETTINGS_________
model: VRES
model_path: VRES_x3.pt
scale: 3
test_set: IndMya
Contructing dataset...
Testing...
Traceback (most recent call last):
  File "/media/sumit/New Volume1/RnD/video-super-resolution-master/test.py", line 93, in <module>
    main()
  File "/media/sumit/New Volume1/RnD/video-super-resolution-master/test.py", line 88, in main
    stats, outputs = solver.test(train_dataset, args.model_path)
  File "/media/sumit/New Volume1/RnD/video-super-resolution-master/solver.py", line 252, in test
    _, _, stats, outputs = self._check_PSNR(dataset, is_test=True)
  File "/media/sumit/New Volume1/RnD/video-super-resolution-master/solver.py", line 151, in _check_PSNR
    output_batch = self.model(input_batch)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/media/sumit/New Volume1/RnD/video-super-resolution-master/model.py", line 81, in forward
    out = self.residual_layer(out)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/media/sumit/New Volume1/RnD/video-super-resolution-master/model.py", line 94, in forward
    return self.relu(self.conv(x))
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 338, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 3.94 GiB total capacity; 3.12 GiB already allocated; 7.38 MiB free; 48.66 MiB cached)
```
My GPU has 4 GB of memory. Although I changed the batch size to 1 and the number of workers to 0, I still get an out-of-memory error during both training and testing. What is the GPU memory requirement? Or is there any way I can run this on my current GPU?
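A common cause of test-time OOM is that the forward pass still records activations for autograd, and that whole video frames are pushed through the network at once. A minimal sketch of both mitigations, wrapping inference in `torch.no_grad()` and processing the frame tile by tile, is below. The model here is a small hypothetical conv stack standing in for VRES, and `infer_tiled` is an illustrative helper, not part of this repository:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the VRES network: a tiny shape-preserving conv stack.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)
model.eval()

def infer_tiled(model, frame, tile=64):
    """Run inference one tile at a time so only a single tile's
    activations are resident in memory; torch.no_grad() stops autograd
    from caching activations, usually the biggest memory saver at test time."""
    _, _, h, w = frame.shape
    out = torch.empty_like(frame)
    with torch.no_grad():
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                patch = frame[:, :, y:y + tile, x:x + tile]
                out[:, :, y:y + tile, x:x + tile] = model(patch)
    return out

frame = torch.randn(1, 1, 128, 128)  # one grayscale video frame (N, C, H, W)
sr = infer_tiled(model, frame)
print(sr.shape)  # same spatial size as the input frame
```

Note that naive tiling can leave visible seams at tile borders, since the convolutions lose context at each tile edge; in practice tiles are usually processed with a small overlap and the borders discarded before stitching.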