
Using CPU vs GPU in training with ML-Agents #1246

Closed
adam-pociejowski opened this issue Sep 20, 2018 · 7 comments
Comments

@adam-pociejowski

Hello,
I'm using ML-Agents on Windows 10. My hardware is:
CPU: AMD Ryzen 7 1700 Eight-Core Processor
RAM: 16GB
GPU: NVIDIA GeForce GTX 1060 6GB

I tried running ML-Agents training on both the CPU and the GPU.
I enabled CUDA support and installed tensorflow-gpu following this guide:
https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation-Windows.md

Surprisingly, I got better performance with the CPU than with the GPU.
I was expecting GPU training to outperform the CPU.

To train, I used the command from the guide:
python learn.py ./pushblock/1 --train --run-id=1

The ML-Agents window was unresponsive for some time after running the script with the GPU, and training was much slower.

Does it make any sense to use the GPU instead of the CPU for training with ML-Agents?
Or did I do something wrong?

@MarcoMeter

Hi @adam-pociejowski
This PPO implementation is not optimized for GPU use. In general, it is not easy to optimize reinforcement learning for GPUs, so currently you are better off with a CPU.
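If tensorflow-gpu is installed and keeps grabbing the GPU even though CPU training is faster, one common workaround is to hide the CUDA devices before TensorFlow initializes. A minimal sketch (this relies on the standard `CUDA_VISIBLE_DEVICES` convention that TensorFlow honors; it is not an ML-Agents-specific setting):

```python
import os

# Hide all CUDA devices so TensorFlow falls back to the CPU.
# This must be set before TensorFlow is imported/initialized.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

# ... then import TensorFlow / launch the trainer as usual.
```

Equivalently, you can set the variable in the shell before launching the training script, which avoids touching any code.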

@adam-pociejowski
Author

Thanks for the fast response!
As you suggested, I will use my CPU.

@maystroh

maystroh commented Dec 6, 2018

@MarcoMeter When are you planning to support the GPU? I ask because I'm using visual observations, which require GPU capability to run the training.

@shihzy
Contributor

shihzy commented Dec 6, 2018

Hi @maystroh - can you clarify what you are asking for regarding GPU support?

@maystroh

maystroh commented Dec 6, 2018

Sorry, my question was not clear enough. By GPU support I meant having the PPO implementation optimized for the GPU, since working with visual observations needs the GPU more than the CPU, especially if we use a more complex CNN than the two-layer CNN implemented so far in the library.
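To illustrate why a deeper CNN pushes the workload toward the GPU, here is a rough weight-count comparison between a two-conv-layer network and a deeper one. The layer shapes below are made up for illustration only; they are not the actual ML-Agents architecture:

```python
def conv_params(in_ch, out_ch, k):
    """Weights + biases for one k x k convolutional layer."""
    return in_ch * out_ch * k * k + out_ch

# Hypothetical shallow net: two conv layers on an RGB visual observation.
shallow = conv_params(3, 16, 8) + conv_params(16, 32, 4)

# Hypothetical deeper net: four conv layers with wider channels.
deep = (conv_params(3, 32, 8) + conv_params(32, 64, 4)
        + conv_params(64, 64, 3) + conv_params(64, 128, 3))

print(shallow, deep)  # prints "11312 149792": over 13x more conv weights
```

Every one of those extra weights is multiplied against the observation at every training step, which is exactly the kind of dense arithmetic a GPU accelerates and a CPU struggles with.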

@ghost

ghost commented Mar 12, 2019

When I tried A3C on a PC with the same spec as adam's, except that my GPU is a 1080, the CPU was definitely faster than the GPU. That is natural, since A3C relies on multiple threads. On Windows 10, the GPU version even produced garbage values in the network weights and was not trainable.
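For context, A3C's speedup comes from many asynchronous CPU workers, each stepping its own copy of the environment and applying updates to shared parameters. A toy sketch of that pattern (plain Python threads and a shared counter stand in for the real workers and gradient updates; all names here are illustrative):

```python
import threading

steps = 0                 # stands in for the shared parameters
lock = threading.Lock()

def worker(n_steps):
    """One A3C-style worker: step its own environment, push updates."""
    global steps
    for _ in range(n_steps):
        # ... collect experience and compute gradients locally ...
        with lock:
            steps += 1    # asynchronously apply the update to shared state

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(steps)  # prints 8000: all eight workers contributed asynchronously
```

Because the throughput comes from running many lightweight environment simulations concurrently rather than from large matrix multiplications, the CPU is the natural fit for this algorithm.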

@lock

lock bot commented Mar 11, 2020

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
