
Is distributed training supported? #7

Closed
arunavo4 opened this issue Mar 23, 2020 · 12 comments

Comments

@arunavo4

arunavo4 commented Mar 23, 2020

@danijar Thank you for this work. I had this question.

@arunavo4
Author

arunavo4 commented Mar 23, 2020

  config.task = 'atari_Breakout'
  config.envs = 1
  config.parallel = 'none'
  config.action_repeat = 2
  config.time_limit = 1000
  config.prefill = 5000
  config.eval_noise = 0.0
  config.clip_rewards = 'none'

Can you explain this a bit? Even when I run with config.parallel = 'none', there seem to be parallel processes running; it uses all of the CPU but not much of the GPU. Is this the normal behaviour?

Mon Mar 23 14:55:03 2020       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.104      Driver Version: 410.104      CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 108...  Off  | 00000000:06:00.0 Off |                  N/A |
| 28%   50C    P8    17W / 250W |    304MiB / 11178MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0     24508      C   python3                                      147MiB |
|    0     25486      C   python3                                      147MiB |
+-----------------------------------------------------------------------------+

Things are also the same when using 16 envs:

  config.task = 'atari_Breakout'
  config.envs = 16
  config.parallel = 'none'
  config.action_repeat = 2
  config.time_limit = 1000
  config.prefill = 5000
  config.eval_noise = 0.0
  config.clip_rewards = 'none'

@arunavo4
Author

I feel this means the whole thing is running on the CPU and not the GPU.

@arunavo4
Author

  config.task = 'atari_Breakout'
  config.envs = 16
  config.parallel = 'process'
  config.action_repeat = 2
  config.time_limit = 1000
  config.prefill = 5000
  config.eval_noise = 0.0
  config.clip_rewards = 'none'

Changing config.parallel = 'none' to config.parallel = 'process' results in this error:

Traceback (most recent call last):
  File "dreamer.py", line 463, in <module>
    main(parser.parse_args())
  File "dreamer.py", line 422, in main
    actspace = train_envs[0].action_space
  File "/home/arunavo/Pairs-Trading/dreamer/wrappers.py", line 395, in action_space
    self._action_space = self.__getattr__('action_space')
  File "/home/arunavo/Pairs-Trading/dreamer/wrappers.py", line 402, in __getattr__
    return self._receive()
  File "/home/arunavo/Pairs-Trading/dreamer/wrappers.py", line 436, in _receive
    message, payload = self._conn.recv()
  File "/usr/lib/python3.6/multiprocessing/connection.py", line 251, in recv
    return _ForkingPickler.loads(buf.getbuffer())
  File "/home/arunavo/Pairs-Trading/dreamer/wrappers.py", line 306, in __getattr__
    return getattr(self._env, name)
  File "/home/arunavo/Pairs-Trading/dreamer/wrappers.py", line 306, in __getattr__
    return getattr(self._env, name)
  File "/home/arunavo/Pairs-Trading/dreamer/wrappers.py", line 306, in __getattr__
    return getattr(self._env, name)
  [Previous line repeated 328 more times]
RecursionError: maximum recursion depth exceeded while calling a Python object
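
For what it's worth, this exact traceback pattern (a forwarding __getattr__ re-entered inside _ForkingPickler.loads) is what you get when the wrapper's _env attribute is looked up before it has been restored, e.g. while the object is being rebuilt from a pipe message. A minimal, hypothetical reproduction, not the actual dreamer wrapper:

  import pickle

  class Wrapper:
    def __init__(self, env):
      self._env = env

    def __getattr__(self, name):
      # Only called for attributes that are not found normally. While
      # unpickling, pickle probes the half-built instance for __setstate__
      # before self._env has been restored, so this lookup re-enters
      # __getattr__('_env') and never terminates.
      return getattr(self._env, name)

  pickle.loads(pickle.dumps(Wrapper(object())))
  # RecursionError: maximum recursion depth exceeded while calling a Python object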

@IcarusWizard

@arunavo4, maybe it is caused by the CUDA version. TensorFlow 2.1.0 only supports CUDA 10.1.

After changing the CUDA version, the code runs smoothly on my machine.
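
For reference, the matching build can be pinned with something like:

  pip3 install tensorflow==2.1.0

(Starting with 2.1, the regular tensorflow pip package includes GPU support on Linux and Windows, so a separate tensorflow-gpu install isn't required.)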

@danijar
Owner

danijar commented Mar 24, 2020

There are some features that should allow interacting with a vectorized environment. In this case, the agent receives a batch of inputs and produces a batch of actions. The environments are stepped in sync but in parallel, each in its own thread or process. However, this isn't a well-tested feature and I can't provide much support for it.

In practice, I've found the computational bottleneck to be training the world model and not environment interaction, so I haven't tested vectorized acting much.
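
Roughly, the batched interaction looks like this (a hypothetical sketch, not the exact loop in this repository):

  import numpy as np

  def rollout(envs, policy, steps):
    # The policy maps a batch of observations to a batch of actions;
    # all environments are stepped once per iteration, in lockstep.
    obs = np.stack([env.reset() for env in envs])    # (num_envs, *obs_shape)
    for _ in range(steps):
      actions = policy(obs)                          # (num_envs, *act_shape)
      results = [env.step(a) for env, a in zip(envs, actions)]
      obs = np.stack([r[0] for r in results])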

@arunavo4
Author

@danijar So you are saying that leaving it at the default is the best way to train it?

@arunavo4
Author

@IcarusWizard Did you try it with Atari? And was your GPU being utilized? I am in the process of upgrading to the newer CUDA; I will let you know if I make progress.

@IcarusWizard

@arunavo4 It works on Atari too, and the GPUs are utilized. You just need to pass additional arguments like --action_dist onehot --expl epsilon_greedy to run in discrete mode.
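
So the full command ends up looking roughly like this (flag names assumed to mirror the config entries above):

  python3 dreamer.py --task atari_Breakout --action_dist onehot --expl epsilon_greedy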

@arunavo4
Author

@IcarusWizard Thanks a lot, it finally works now! It uses the GPU very well.

@danijar
Owner

danijar commented Mar 25, 2020

Exactly, those are the necessary flags for discrete actions. You may want to tune some of the other hyperparameters for Atari as well (e.g. kl_scale and deter_size). I will update the repository at some point with my Atari configuration; I'm still working on some details for this.
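
For example, an invocation with those overrides might look like this (the values here are placeholders, not my final Atari settings):

  python3 dreamer.py --task atari_Breakout --action_dist onehot --expl epsilon_greedy --kl_scale 0.1 --deter_size 600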

danijar changed the title from "can this Run in a distributed fashion? Does it have learning benefits like Apex" to "Question about distributed training" on Mar 25, 2020
danijar changed the title from "Question about distributed training" to "Is distributed training supported?" on Mar 26, 2020
@CR-Gjx

CR-Gjx commented Oct 27, 2020

@IcarusWizard Thanks a lot, it finally works now! It uses the GPU very well.

Hi! I cannot run Dreamer on the GPU either; can you share some tips?

@IcarusWizard

IcarusWizard commented Oct 27, 2020

@CR-Gjx Just make sure you use exactly TensorFlow 2.1.0 and CUDA 10.1.
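
A quick sanity check that the install actually sees the GPU (using an API available in TF 2.1):

  import tensorflow as tf
  print(tf.__version__)                                        # expect 2.1.0
  print(tf.config.experimental.list_physical_devices('GPU'))   # expect one entry per GPU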
