GPU running out of memory #4
Hi @jamel-mes, my guess is that your GPU doesn't have enough memory to store the model. What are your GPU model and memory size?
I have a 1080 with 8 GB.
The model takes ~9 GB, which is why you are getting the out-of-memory error. You can reduce the number of parameters for the generator, which is defined in this block, but in that case you will need to train the generative model from scratch, as we provide the pre-trained model only for the configuration above.
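To make the trade-off concrete, here is a rough sketch of how shrinking the generator's hidden size shrinks its parameter count. The layer type and all sizes below are hypothetical (a single GRU layer is assumed purely for illustration); check the actual generator block in the notebook for the real architecture and dimensions. Note that parameters are only one part of the footprint; activations, gradients, and optimizer state usually dominate during training.

```python
# Rough float32 memory estimate for a hypothetical single-layer GRU
# generator. Shrinking hidden_size reduces the dominant hidden->hidden
# weight matrices quadratically.

def gru_param_count(input_size, hidden_size):
    # A GRU layer has 3 gates, each with an input->hidden and a
    # hidden->hidden weight matrix, plus two bias vectors per gate.
    w_ih = 3 * hidden_size * input_size
    w_hh = 3 * hidden_size * hidden_size
    biases = 2 * 3 * hidden_size
    return w_ih + w_hh + biases

def params_to_gb(n_params, bytes_per_value=4):
    # float32 values take 4 bytes each
    return n_params * bytes_per_value / 1024**3

big = gru_param_count(input_size=1500, hidden_size=1500)    # made-up sizes
small = gru_param_count(input_size=1500, hidden_size=768)   # made-up sizes
print(params_to_gb(big) > params_to_gb(small))  # True
```

Cutting the hidden size roughly in half reduces the hidden-to-hidden weights to about a quarter of their original size, which is why this knob is effective when you can afford retraining from scratch.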
Great, thank you for your help!
I think there is another thing you can try in order to squeeze into your 8 GB of memory without changing the generator. Try reducing the batch size in the "Policy gradient with experience replay" and "Policy gradient without experience replay" steps from the default 10 to 5.
With this batch size, the model took 6 GB of memory on my machine.
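The two data points in this thread (batch 10 → ~9 GB, batch 5 → ~6 GB) are consistent with a simple linear model: a fixed cost for weights, gradients, and optimizer state, plus a per-sample cost for activations kept for backprop. The split below (~3 GB fixed, ~0.6 GB per sample) is just what those two numbers imply, not a measurement.

```python
# Back-of-the-envelope memory model: fixed component + batch-linear
# component. The 3.0 / 0.6 calibration is inferred from the ~9 GB and
# ~6 GB figures reported above, purely for illustration.

def estimated_memory_gb(n_batch, fixed_gb=3.0, per_sample_gb=0.6):
    # fixed_gb: weights, gradients, optimizer state (batch-independent)
    # per_sample_gb: activations retained for backprop, per sequence
    return fixed_gb + n_batch * per_sample_gb

print(estimated_memory_gb(10))  # ~9 GB, matches the default batch size
print(estimated_memory_gb(5))   # ~6 GB, matches the reduced batch size
```

A model like this makes it easy to guess the largest batch that fits before launching a run, rather than trial-and-error restarts.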
Decreasing the batch size does the trick!
Is there a way to estimate the memory needed beforehand?
@gmseabra technically yes, since the values are stored as float32. That said, the easiest way to reduce memory usage is just to decrease the batch size, as discussed above. In that scenario you can keep using the pretrained model and simply try multiple batch sizes to see what fits into your GPU memory.
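Since everything is float32, a tensor's footprint is just its element count times 4 bytes, which makes a rough a-priori estimate straightforward. The helper and the tensor shape below are hypothetical, not taken from the ReLeaSE code:

```python
# Estimate the memory of a float32 tensor from its shape.
# Shape values below are illustrative placeholders.

def tensor_bytes(shape, bytes_per_value=4):
    n = 1
    for dim in shape:
        n *= dim
    return n * bytes_per_value

# e.g. a hypothetical (batch=10, seq_len=120, hidden=1500) activation
# tensor kept around for backprop:
mb = tensor_bytes((10, 120, 1500)) / 1024**2
print(round(mb, 2))  # 6.87 (MB)
```

Summing such estimates over the weights, their gradients, and every activation saved for the backward pass gives a lower bound on the required GPU memory; frameworks add caching overhead on top of that.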
I was actually thinking about the possibility of checking the memory size and adjusting n_batch on the fly, depending on the available GPU memory... But yes, reducing the batch size works for me too (on a GTX 1060 with 6 GB of memory).
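That on-the-fly idea can be sketched as follows. The fixed/per-sample costs reuse the rough calibration implied by the numbers in this thread and are assumptions, not measurements; in recent PyTorch versions the free-memory figure could come from `torch.cuda.mem_get_info()`, but the picker itself is framework-agnostic:

```python
# Pick the largest n_batch (up to the default of 10) whose estimated
# footprint fits in the available GPU memory. The fixed_gb and
# per_sample_gb defaults are hypothetical calibration values.

def pick_n_batch(free_gb, fixed_gb=3.0, per_sample_gb=0.6, max_batch=10):
    for n in range(max_batch, 0, -1):
        if fixed_gb + n * per_sample_gb <= free_gb:
            return n
    raise MemoryError("not enough GPU memory even for n_batch=1")

print(pick_n_batch(8.0))  # e.g. a 1080 with 8 GB -> 8
print(pick_n_batch(6.0))  # e.g. a 1060 with 6 GB -> 5
```

Under these assumed costs, a 6 GB card lands on n_batch=5, which matches what worked in practice above.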
Good afternoon,
I've been using the code from the develop branch with PyTorch 0.4. I am hitting the memory issue below when executing this piece of code from the notebook example:
RuntimeError: cuda runtime error (2) : out of memory at /opt/conda/conda-bld/pytorch_1525909934016/work/aten/src/THC/generic/THCTensorMath.cu:35

Any idea of what might be causing this problem?
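A common workaround pattern for this RuntimeError is to catch the CUDA out-of-memory error and retry with a smaller batch. The sketch below simulates the training step with a stand-in function so it runs without a GPU; `run_step` and its memory budget are hypothetical, and in real PyTorch code you would also typically call `torch.cuda.empty_cache()` after catching the error before retrying.

```python
# Catch-and-retry pattern for CUDA OOM errors, simulated without a GPU.

def run_step(n_batch, budget=7):
    # Stand-in for one policy-gradient step: pretend anything over
    # `budget` samples exceeds GPU memory (mimicking the CUDA message).
    if n_batch > budget:
        raise RuntimeError("cuda runtime error (2) : out of memory")
    return n_batch

def run_with_backoff(n_batch=10):
    while n_batch >= 1:
        try:
            return run_step(n_batch)
        except RuntimeError as e:
            if "out of memory" not in str(e):
                raise  # unrelated errors should still propagate
            n_batch //= 2  # halve the batch and retry
    raise MemoryError("could not fit even n_batch=1")

print(run_with_backoff())  # falls back from 10 to 5
```

This degrades gracefully on smaller cards instead of crashing the notebook mid-run, at the cost of noisier gradient estimates from the smaller batch.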