Running out of GPU memory #21
PR #18 recently changed training on the GPU. It is possible that it puts too many ops on the GPU. Which tensors fail? What GPU do you use? Can you please set
@wwxFromTju Could you provide an update, please?
@danijar What update?
I asked a few questions in my first comment that could help debug your problem. Did you have a chance to try those?
I disabled GPU usage by default and implemented splitting episodes (using the
Hey all:
When I try to run train.py, it takes all the GPU memory. I tried adding per_process_gpu_memory_fraction, but it doesn't work.
So how can I limit the GPU memory usage?
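For reference, here is a minimal sketch of how a memory cap is typically passed in TensorFlow 1.x. The fraction value and where the config is wired into train.py are assumptions; it only takes effect if the session that allocates GPU memory actually receives this config.

```python
# Sketch (TensorFlow 1.x assumed): limit GPU memory at session creation.
# The 0.4 fraction is an example value, not from the project.
import tensorflow as tf

gpu_options = tf.GPUOptions(
    per_process_gpu_memory_fraction=0.4,  # cap this process at ~40% of GPU memory
    allow_growth=True)                    # allocate lazily instead of grabbing it all upfront
config = tf.ConfigProto(gpu_options=gpu_options)

# The config must be passed to the session train.py creates, e.g.:
with tf.Session(config=config) as sess:
    pass  # run training ops here
```

Note that setting the fraction on a session you create yourself has no effect if the training script builds its own session elsewhere without the config.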