Running PPO on multiple GPUs #264
Comments
Hey. It depends on what you want to parallelize. If you want to parallelize the algorithm updates themselves, then no, there is no support for a multi-GPU setup right now. Vectorized environments only parallelize the environments. If you have compute-heavy environments, running them in parallel might speed up gathering samples, but it does not affect the algorithm training itself.
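For illustration, here is a minimal sketch of what this looks like in practice (the environment, n_envs and timestep counts are arbitrary placeholders): the environments step in separate worker processes via SubprocVecEnv, while the PPO update itself still runs on a single device.

```python
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.vec_env import SubprocVecEnv

if __name__ == "__main__":  # guard needed because SubprocVecEnv spawns worker processes
    # 8 environment copies, each stepping in its own process (sample collection only)
    vec_env = make_vec_env("CartPole-v1", n_envs=8, vec_env_cls=SubprocVecEnv)

    # The policy/value networks and the gradient updates live on a single device;
    # "auto" picks the GPU if one is available, otherwise the CPU.
    model = PPO("MlpPolicy", vec_env, device="auto", verbose=1)
    model.learn(total_timesteps=100_000)
```

Only the sample gathering is parallelized here; there is no data-parallel replication of the networks across GPUs.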
PPO is quite lightweight, so unless you are using a big network for the policy/value function, I would recommend getting better CPUs rather than more GPUs. The bottleneck usually comes from the environment simulation, not the gradient update. Please take a look at the issue checklist next time, as this appears to be a duplicate ;) Related issues:
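A rough way to check this on your own setup is to time a short run on CPU versus GPU; with a small MlpPolicy the difference is often negligible or even in the CPU's favour. This is only a sketch, with an arbitrary toy environment and arbitrary n_envs/timestep values:

```python
import time

import torch
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env

# Time the same short training run on CPU and (if available) on GPU.
for device in ["cpu"] + (["cuda"] if torch.cuda.is_available() else []):
    env = make_vec_env("CartPole-v1", n_envs=4)
    model = PPO("MlpPolicy", env, device=device, verbose=0)
    start = time.perf_counter()
    model.learn(total_timesteps=50_000)
    print(f"{device}: {time.perf_counter() - start:.1f}s for 50k steps")
```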
Hi! Is it possible to deploy different training tasks on different GPUs? All I know is to set …
Could you please open an issue with minimal code to reproduce? (This seems to be a bug.) As a quick fix, you can play with …
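One common way to run independent training tasks on different GPUs (a hedged sketch, not necessarily what the quick fix above refers to; the script name, environment and hyperparameters are placeholders) is to start one process per GPU and pass each run an explicit device string:

```python
import sys

from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env

if __name__ == "__main__":
    # e.g. `python train.py cuda:0` in one shell and `python train.py cuda:1` in another
    device = sys.argv[1] if len(sys.argv) > 1 else "cuda:0"
    env = make_vec_env("CartPole-v1", n_envs=4)
    model = PPO("MlpPolicy", env, device=device, verbose=1)
    model.learn(total_timesteps=100_000)
    model.save(f"ppo_{device.replace(':', '_')}")
```

Alternatively, setting CUDA_VISIBLE_DEVICES before launching each process (e.g. CUDA_VISIBLE_DEVICES=1) achieves the same isolation without changing the code, since that process then only sees the selected GPU.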
Hi! I have spent some time trying to reproduce this error but failed. Currently, …
Hello,
I would like to run the PPO algorithm (https://stable-baselines3.readthedocs.io/en/master/modules/ppo.html) on a Google Cloud VM, distributed across multiple GPUs. Looking at the documentation, I can find the text below:
...creating a multiprocess vectorized wrapper for multiple environments, distributing each environment to its own process, allowing significant speed up when the environment is computationally complex.
Does this mean that, using the vectorized wrapper, I should be able to run on multiple GPUs?
Thanks for your help!