Running out of GPU memory #21

Closed
wwxFromTju opened this issue Jan 2, 2018 · 5 comments

Comments

@wwxFromTju

Hey all,

When I run train.py, it takes all of the GPU memory. I tried setting per_process_gpu_memory_fraction, but it doesn't work.

How can I limit how much GPU memory is used?
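For reference, in TensorFlow 1.x the memory fraction is applied through the session config; a minimal sketch of the mechanism (whether it takes effect here depends on how train.py constructs its session, which this sketch does not reproduce):

```python
import tensorflow as tf

# Cap this process at roughly 40% of the GPU's memory instead of letting
# TensorFlow pre-allocate all of it.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.4)
# Alternative: grow the allocation on demand.
# gpu_options = tf.GPUOptions(allow_growth=True)

config = tf.ConfigProto(gpu_options=gpu_options)

x = tf.random_normal([1024, 1024])
y = tf.matmul(x, x)
with tf.Session(config=config) as sess:
    print(sess.run(y).shape)
```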

@danijar
Contributor

danijar commented Jan 5, 2018

PR #18 recently changed training on the GPU. It is possible that it places too many ops on the GPU. Which tensor fails to allocate? What GPU do you use? Can you please set use_gpu = False in the config and verify that CPU-only training works?
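For context, pinning ops to the CPU in TensorFlow looks roughly like the sketch below; this only illustrates the general mechanism, not how the repository's use_gpu option is actually implemented:

```python
import tensorflow as tf

# Build the graph on the CPU explicitly, so nothing is placed on the GPU
# even when one is visible to TensorFlow.
with tf.device('/cpu:0'):
    weights = tf.Variable(tf.random_normal([256, 256]))
    activations = tf.matmul(tf.random_normal([32, 256]), weights)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(activations).shape)
```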

@danijar
Contributor

danijar commented Jan 11, 2018

@wwxFromTju Could you provide an update, please?

@wwxFromTju
Author

@danijar what update?

@danijar
Contributor

danijar commented Jan 11, 2018

I asked a few questions in my first comment that could help debug your problem. Did you have a chance to try those?

@danijar
Contributor

danijar commented Jan 29, 2018

I disabled GPU usage by default and implemented splitting episodes into chunks (via the chunk_length and batch_size config options), so training now works on episodes with many time steps or large observations.
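As a rough illustration of the idea (not the repository's actual implementation), splitting a long episode into fixed-length chunks and batching those keeps the memory of a single training step bounded; the names chunk_length and batch_size below simply mirror the config options mentioned above:

```python
import numpy as np

def chunk_episode(observations, chunk_length):
    # Split one episode of shape [time, ...] into [num_chunks, chunk_length, ...],
    # dropping the trailing steps that do not fill a whole chunk.
    num_chunks = len(observations) // chunk_length
    trimmed = observations[:num_chunks * chunk_length]
    return trimmed.reshape(num_chunks, chunk_length, *observations.shape[1:])

# Example: a long episode of 84x84 RGB frames.
episode = np.zeros((1000, 84, 84, 3), dtype=np.uint8)
chunks = chunk_episode(episode, chunk_length=50)

# Train on small batches of chunks instead of the whole episode at once.
batch_size = 8
for start in range(0, len(chunks), batch_size):
    batch = chunks[start:start + batch_size]
    print(batch.shape)  # the training step would consume this batch
```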

@danijar danijar closed this as completed Jan 29, 2018
@danijar danijar changed the title GPU memory Running out of GPU memory Jan 29, 2018