This repository has been archived by the owner on Sep 29, 2023. It is now read-only.

CUDA memory error #22

Open
viX-shaw opened this issue Jun 12, 2019 · 2 comments

@viX-shaw

Can you help me? I am new to PyTorch.

```
Traceback (most recent call last):
  File "SST/train_ua.py", line 246, in <module>
    train()
  File "SST/train_ua.py", line 184, in train
    out = net(img_pre, img_next, boxes_pre, boxes_next, valid_pre, valid_next)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/SST/layer/sst.py", line 68, in forward
    x_next = self.forward_vgg(x_next, self.vgg, sources_next)
  File "/content/SST/layer/sst.py", line 179, in forward_vgg
    x = vgg[k](x)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 338, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: CUDA out of memory. Tried to allocate 1.55 GiB (GPU 0; 11.17 GiB total capacity; 9.28 GiB already allocated; 1.12 GiB free; 448.72 MiB cached)
```

@wangaixue

@viX-shaw It needs at least two GPUs with 16 GB of memory each.

@EddieEduardo

@wangaixue Oh no! Mine has just 4 GB. Are there any good solutions to make it run on a regular PC? Thanks for your reply.
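
For anyone hitting this on a small GPU, the usual generic workarounds are a smaller batch size, gradient accumulation, and wrapping inference in `torch.no_grad()`. Below is a minimal, self-contained PyTorch sketch of those ideas; the model and data are toy placeholders, not the actual SST network or its training script, so the batch size for SST itself would have to be changed in its own config/arguments.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Toy stand-ins for the SST network and MOT inputs (hypothetical, for illustration only).
net = nn.Linear(128, 2).to(device)
data = TensorDataset(torch.randn(64, 128), torch.randint(0, 2, (64,)))

# 1) A smaller batch size is the first lever against "CUDA out of memory".
loader = DataLoader(data, batch_size=4, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)

# 2) Gradient accumulation keeps the effective batch size at 4 * accum_steps
#    while only one small batch lives on the GPU at a time.
accum_steps = 4
optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    x, y = x.to(device), y.to(device)
    loss = criterion(net(x), y) / accum_steps
    loss.backward()
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()

# 3) For inference/evaluation, disabling autograd avoids storing activations.
with torch.no_grad():
    _ = net(torch.randn(4, 128).to(device))
```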
