Maybe. For most of the standard Gym environments, the environment isn't really the bottleneck, so it's not clear that you get much of a speedup. However, it could be useful for more computationally expensive environments. Do you have any specific use case in mind?
I think most RL algorithms with pybullet envs require a lot of training time, since pybullet is computationally expensive. For example, chainerrl takes about 1.5 days to train kuka-diverse-object-env with DQN.
It seems that some of the continuous control scripts in your code take more than 5 hours to train, so I am implementing a simple multiprocess runner to reduce the training time. I tested it by training kuka-diverse-object-env with 30 CPUs, and it saved a lot of time. (Maybe I'll send a pull request after I refactor and test it.) https://github.com/syuntoku14/image_based_controls/blob/master/experiments/runner.py
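For reference, the idea of a multiprocess runner can be sketched roughly like this: each worker process owns its own environment instance and collects full episodes independently, and the parent aggregates the results. This is only a minimal sketch, not the linked implementation; `DummyEnv`, `rollout`, and `run_parallel` are hypothetical names, and `DummyEnv` stands in for an expensive pybullet env with the usual Gym-style `reset()`/`step()` interface.

```python
import multiprocessing as mp
import random


class DummyEnv:
    """Placeholder for a Gym-style env; a real pybullet env would go here."""

    def __init__(self, seed):
        self.rng = random.Random(seed)
        self.t = 0

    def reset(self):
        self.t = 0
        return 0.0  # dummy observation

    def step(self, action):
        self.t += 1
        reward = self.rng.random()
        done = self.t >= 10  # fixed-length dummy episode
        return 0.0, reward, done, {}


def rollout(seed):
    """Run one episode in a worker process and return its total reward.

    Each worker builds its own env so that nothing (e.g. a physics
    client) has to be shared across process boundaries.
    """
    env = DummyEnv(seed)
    env.reset()
    total, done = 0.0, False
    while not done:
        _, reward, done, _ = env.step(0)
        total += reward
    return total


def run_parallel(n_episodes, n_workers):
    """Collect n_episodes episodes across n_workers processes."""
    with mp.Pool(n_workers) as pool:
        return pool.map(rollout, range(n_episodes))


if __name__ == "__main__":
    returns = run_parallel(n_episodes=8, n_workers=4)
    print(len(returns))
```

The key point for pybullet-style envs is that the simulation state lives entirely inside each worker, so only small arrays (observations, rewards) cross process boundaries.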
Hi Chris,
Do you plan to add an EnvRunner class that supports multiprocessing?