CUDA error: out of memory #22
Hi,
We haven't finished checking the multi-GPU version yet, so we put these files in the MultiGPU directory.
I just wanted to make sure you saw that I referred to both the main directory and the MultiGPU directory. Out of the box, elegantrl/run.py does not work for me, failing with the same out-of-memory error described above. I tried both to see whether either example would work, but I am unable to find a configuration that works. I have tried lowering the net size, batch size, rollout size, etc.
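To make the "lowering the net size, batch size, rollout size" experiment concrete, here is a minimal sketch. The keys mirror ElegantRL-style hyperparameter names (net_dim, batch_size, max_memo, rollout_num), but the dict, the default values, and the shrink helper are all illustrative assumptions, not ElegantRL's actual config object:

```python
# Hypothetical reduced settings of the kind tried above. Key names echo
# ElegantRL hyperparameters, but this dict and its values are illustrative only.
DEFAULTS = {"net_dim": 2**8, "batch_size": 2**8, "max_memo": 2**17, "rollout_num": 4}

def shrink(config: dict, factor: int = 4) -> dict:
    """Return a copy with every size-like setting divided by `factor` (min 1)."""
    return {k: max(1, v // factor) for k, v in config.items()}

small = shrink(DEFAULTS)
print(small)  # {'net_dim': 64, 'batch_size': 64, 'max_memo': 32768, 'rollout_num': 1}
```

Even with all sizes cut by 4x like this, the out-of-memory error persisted, which is what suggests the problem is not simply oversized hyperparameters.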
We have fully upgraded ElegantRL, and it now supports multi-GPU training (1 to 8 GPUs). The problem you mentioned has been resolved. I'm sorry that we have been busy developing the 80-GPU (cloud platform) version of ElegantRL and were unable to reply to you in time. I will close this issue in 3 days.
Hello,
Running run.py in both the main directory and in the MultiGPU directory leads to a CUDA out-of-memory error. This error persists no matter what batch size or net size I specify, and it does not matter whether I use the MultiGPU version or the main elegantrl/run.py file. I am running this on an Nvidia Quadro 4000 with 8 GB of memory.
In order to get the examples to work, I have to specify a GPU ID of "-1".
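For readers unfamiliar with the convention, a GPU ID of -1 typically means "fall back to CPU" in PyTorch-based projects. A minimal sketch of that selection logic, assuming this is how ElegantRL interprets its gpu_id (the helper name select_device is hypothetical, not part of ElegantRL):

```python
# Sketch of the device-selection convention implied above: a non-negative
# gpu_id picks that CUDA device; -1 (or an absent CUDA runtime) means CPU.
def select_device(gpu_id: int, cuda_available: bool) -> str:
    """Return a torch-style device string from an integer GPU id."""
    if gpu_id >= 0 and cuda_available:
        return f"cuda:{gpu_id}"
    return "cpu"

print(select_device(0, True))    # "cuda:0"
print(select_device(-1, True))   # "cpu"  (forced CPU, as in the workaround)
print(select_device(0, False))   # "cpu"  (no CUDA runtime present)
```

So the workaround above trades the CUDA error for much slower CPU-only training.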
EDIT:
If I set rollout_num to 1, the error changes. Researching it, it still appears to be a memory problem, but as far as I can tell the "explore" process should take up about 1.5 GB of memory, while I have almost 4 GB free.
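A back-of-the-envelope calculation of the kind behind that "about 1.5 GB" estimate: a replay/trajectory buffer of float32 entries costs roughly (steps stored) x (floats per step) x 4 bytes. All the concrete numbers below (state_dim, action_dim, buffer length) are hypothetical, chosen only to show the arithmetic:

```python
# Rough sketch of replay-buffer memory, to sanity-check a ~1.5 GB estimate.
# All sizes are hypothetical; float32 entries are 4 bytes each.
def buffer_bytes(max_memo: int, state_dim: int, action_dim: int,
                 bytes_per_float: int = 4) -> int:
    """Bytes to store state, action, reward, and done flag for each step."""
    floats_per_step = state_dim + action_dim + 2  # +2 for reward and done flag
    return max_memo * floats_per_step * bytes_per_float

gb = buffer_bytes(max_memo=2**20, state_dim=360, action_dim=8) / 2**30
print(f"{gb:.2f} GiB")  # ~1.45 GiB for these hypothetical sizes
```

Note that this counts only the buffer; network weights, optimizer state, and CUDA context overhead come on top, which can make a "1.5 GB" workload fail even with 4 GB nominally free.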