
gpu running out of memory #4

Closed
jamel-mes opened this issue Aug 16, 2018 · 9 comments

@jamel-mes
Good afternoon,

I've been using the code from the develop branch with PyTorch 0.4. I'm hitting the out-of-memory error below when executing this piece of code from the example notebook:

    ### Transfer learning 
    RL.transfer_learning(transfer_data, n_epochs=n_transfer)
    _, prediction = estimate_and_update(n_to_generate)
    prediction_log.append(prediction)
    if len(np.where(prediction >= threshold)[0])/len(prediction) > 0.15:
        threshold = min(threshold + 0.05, 0.8)

RuntimeError: cuda runtime error (2) : out of memory at /opt/conda/conda-bld/pytorch_1525909934016/work/aten/src/THC/generic/THCTensorMath.cu:35

Any idea of what might be causing this problem?

@Mariewelt (Collaborator) commented Aug 16, 2018

Hi @jamel-mes

My guess is that your GPU doesn't have enough memory to store the model. What is your GPU model and memory?

UPD:
you can check this by running nvidia-smi command in the terminal.

@jamel-mes (Author)

I have a GTX 1080 with 8 GB.

@Mariewelt (Collaborator) commented Aug 16, 2018

The model takes ~9 GB, which is why you are getting the out-of-memory error. You can reduce the number of parameters for the generator, which is defined in this block:

hidden_size = 1500
stack_width = 1500
stack_depth = 200

but in this case you will need to train the generative model from scratch, as we provide the pre-trained model only for the configuration above.
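For instance, a smaller configuration could look like the block below. These halved values are illustrative, not an officially supported setting; any change here means retraining the generator from scratch, since the released checkpoint only matches the 1500/1500/200 settings.

```python
# Hypothetical reduced configuration -- illustrative values only.
# Halving hidden_size and stack_width shrinks the largest weight
# matrices roughly fourfold, since their size scales with the
# product of these two dimensions.
hidden_size = 750
stack_width = 750
stack_depth = 100
```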

@jamel-mes (Author)

Great, thank you for your help!

@Mariewelt (Collaborator) commented Aug 17, 2018

@jamel-mes

I think there is one more thing you can try in order to squeeze into your 8 GB of memory without changing the generator. Try reducing the batch size in the Policy gradient with experience replay and Policy gradient without experience replay steps from the default 10 to 5:

for _ in range(n_policy_replay):
    rewards.append(RL.policy_gradient_replay(gen_data, replay, threshold=threshold, n_batch=5))

for _ in range(n_policy):
    rewards.append(RL.policy_gradient(gen_data, threshold=threshold, n_batch=5))

With this batch size, the model took 6 GB of memory on my machine.

@Mariewelt reopened this Aug 17, 2018
@jamel-mes (Author)

Decreasing the batch size does the trick!

@gmseabra

> The model takes ~9 GB, that's why you are having the out-of-memory error

Is there a way to estimate the memory need beforehand?

@Mariewelt (Collaborator)

@gmseabra technically yes, the values are stored as float32. I would say the easiest way to reduce memory usage is just decreasing the batch size, as we discussed above. In this scenario, you can keep using the pretrained model and just try multiple batch sizes to see what fits into your GPU memory.
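A back-of-the-envelope sketch of that estimate, assuming 4 bytes per float32 parameter (the example matrix shape is illustrative, not the repo's exact architecture):

```python
# Estimate the memory needed just to store float32 parameters.
BYTES_PER_FLOAT32 = 4

def param_memory_gb(num_params: int) -> float:
    """GB required to hold num_params float32 values."""
    return num_params * BYTES_PER_FLOAT32 / 1024**3

# Example: one hidden_size x hidden_size weight matrix
hidden_size = 1500
print(f"{param_memory_gb(hidden_size * hidden_size):.4f} GB")
```

Keep in mind that training usually needs several times this figure, since gradients, optimizer state, and activations (which grow with batch size) all live on the GPU as well.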


@gmseabra

I was actually thinking about the possibility of checking the memory size and adjusting n_batch on the fly, depending on the GPU memory available...

But yes, reducing the batch size works for me too (on a GTX 1060 with 6 GB of memory).
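A minimal sketch of that idea. The function name and the per-sample memory figure are made up for illustration; in PyTorch the free-memory number could come from torch.cuda.mem_get_info(), which returns (free, total) in bytes.

```python
# Pick the largest batch size that fits in the available GPU memory.
# Free memory is taken as a parameter so the heuristic stays
# framework-agnostic; with PyTorch you could pass the first element
# of torch.cuda.mem_get_info(). The per-sample cost is a placeholder
# you would measure empirically for your own model.

def pick_n_batch(free_bytes: int, bytes_per_sample: int,
                 candidates=(10, 5, 2, 1), safety=0.8) -> int:
    """Return the largest candidate batch size whose estimated usage
    stays under safety * free memory; fall back to the smallest."""
    budget = free_bytes * safety
    for n in candidates:
        if n * bytes_per_sample <= budget:
            return n
    return candidates[-1]

# Example: ~6 GB free, each sample estimated to need ~0.9 GB
n_batch = pick_n_batch(6 * 1024**3, int(0.9 * 1024**3))
```

With the example numbers above this picks n_batch = 5, which matches the setting that worked in this thread.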
