
choose GPU device #17

Closed
kingfengji opened this issue May 24, 2016 · 4 comments

@kingfengji

Hi there, great one!

One question: I have 3 GPUs to work with; how do I choose which GPU to use?

I mean, I don't want to use gpu_0 because my monitors are connected to it.
I'd like to use gpu_1 instead...

Thanks!

@anishathalye
Owner

There's currently no clean way to do it via command-line switches; you need to modify the code.

See how we specify that we want to use the CPU here? You can do something similar here and force it to use /gpu:1. See this TensorFlow documentation for more information.
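For illustration, here is a minimal sketch of pinning ops to the second GPU with `tf.device` (TF 1.x-era API, which is what existed when this issue was filed; the tiny graph below is illustrative, not the project's actual code):

```python
import tensorflow as tf

# Pin ops to the second GPU instead of the default /gpu:0.
with tf.Graph().as_default(), tf.device('/gpu:1'):
    a = tf.constant([1.0, 2.0])
    b = tf.constant([3.0, 4.0])
    total = a + b  # this op is placed on /gpu:1

    # allow_soft_placement lets TensorFlow fall back to another device
    # for any op without a GPU kernel, instead of raising an error.
    config = tf.ConfigProto(allow_soft_placement=True)
    with tf.Session(config=config) as sess:
        print(sess.run(total))
```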

@anishathalye
Owner

Btw, if you want to improve the code so that you can do this kind of thing via command-line switches, pull requests are welcome 😄

@rayrayson

Have you tried setting the "CUDA_VISIBLE_DEVICES" environment variable before starting neural_style.py?

Ref CUDA doc: http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars

It worked for us (the Open Grid Scheduler/Grid Engine project) when we needed to control which GPU a job uses, back when we did the Multi-Core Processor Binding with hwloc project:

http://gridscheduler.sourceforge.net/projects/hwloc/GridEnginehwloc.html
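For what it's worth, a hypothetical sketch of doing this from Python (the variable must be set before TensorFlow is imported; with CUDA_VISIBLE_DEVICES="1", physical GPU 1 appears to the process as device 0, so the default placement lands on it):

```python
import os

# Expose only physical GPU 1 to this process; it will show up as /gpu:0.
os.environ['CUDA_VISIBLE_DEVICES'] = '1'

import tensorflow as tf  # must be imported after the variable is set
```

Equivalently, from the shell: `CUDA_VISIBLE_DEVICES=1 python neural_style.py ...`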

@anishathalye
Owner

Going to assume this is resolved. If not, feel free to reopen.
