
Out of memory for NLLLoss even though the batch size is small #194

Open
serenayj opened this issue Apr 4, 2020 · 0 comments
serenayj commented Apr 4, 2020

Hi, I'm using this framework on my dataset. Everything works fine on CPU, but when I moved it to GPU I got the following error:
File "/home/ibm_decoder/DecoderRNN.py", line 107, in forward_step predicted_softmax = function(self.out(output.contiguous().view(-1, self.hidden_size)), dim=1).view(batch_size, output_size, -1) File "/home/anaconda2/envs/lib/python3.6/site-packages/torch/nn/functional.py", line 1317, in log_softmax ret = input.log_softmax(dim) RuntimeError: CUDA out of memory. Tried to allocate 2.77 GiB (GPU 0; 10.76 GiB total capacity; 8.66 GiB already allocated; 943.56 MiB free; 9.06 GiB reserved in total by PyTorch)
The batch size is only 32, so I don't know what went wrong or what caused such a large memory allocation.
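For reference, the failing call takes log_softmax over the output of `self.out(...)`, which has shape `(batch_size * target_length, vocab_size)`, so the allocation grows with the target sequence length and the vocabulary size even when the batch is small. A minimal back-of-the-envelope sketch of that arithmetic, where the vocabulary size and target length are assumed purely for illustration (neither is given in the issue):

```python
# Rough estimate of the memory needed for one log_softmax output tensor in
# forward_step. All concrete numbers except batch_size are assumptions made
# for illustration only.
batch_size = 32          # from the issue
target_length = 500      # assumed decoder output length
vocab_size = 50_000      # assumed output vocabulary size
bytes_per_float = 4      # float32 activations

# The projection produces a (batch_size * target_length, vocab_size) tensor,
# and log_softmax allocates another tensor of the same shape for its result.
elements = batch_size * target_length * vocab_size
gib = elements * bytes_per_float / 1024**3
print(f"one such tensor is ~{gib:.2f} GiB")  # ~2.98 GiB with these assumed sizes
```

With numbers in that ballpark, a single activation tensor of this shape is already close to the 2.77 GiB allocation reported in the traceback, before counting the model, optimizer state, and other activations already resident on the GPU.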
