
Specify GPU Card(s) to run on #3

Open
mschonwe opened this issue Dec 10, 2015 · 3 comments

Comments

@mschonwe

This may be outside the scope of the intended use of this project, but is it possible to specify which GPU card to run on from Pretty Tensor? I have a long-running training job (non-TensorFlow) on GPU:0, and the default for PT seems to be to run only on GPU:0.

@eiderman
Contributor

Pretty Tensor is completely compatible with the device scoping mechanism in TF.

If you wrap your model with:

with tf.device('/gpu:1'):
  build_my_model()

then it will assign the appropriate device. When you explicitly specify a GPU device, no attempt is made to put CPU-only ops on a CPU, so those ops (such as lookup_embedding) either need to be inside a nested device context or be built before the GPU device specification.

Intelligently guessing the correct device is outside the scope of PT, but support for explicit assignments is definitely in scope.
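A minimal sketch of what that scoping looks like, assuming the device names '/gpu:1' and '/cpu:0' and using stand-in ops rather than anything from shakespeare.py:

import tensorflow as tf

# Stand-in input; a real model would use its own inputs.
x = tf.placeholder(tf.float32, [None, 32], name='x')

with tf.device('/gpu:1'):
    # Ops created inside this block are pinned to GPU 1.
    w = tf.Variable(tf.zeros([32, 10]), name='w')
    logits = tf.matmul(x, w)

    # CPU-only ops need a nested device scope, since an explicit GPU device
    # would otherwise be applied to them as well.
    with tf.device('/cpu:0'):
        predictions = tf.argmax(logits, 1)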

@mschonwe
Author

Could you provide an example? In shakespeare.py, where would the with tf.device('/gpu:1') go?

Presumably the change you made (Issue #1) for embedding lookup would provide the necessary assignment for that Op?
with tf.device('/cpu:0'):
  embedded = text_in.embedding_lookup(CHARS, [EMBEDDING_SIZE])

@eiderman
Contributor

It requires two changes, and you've alerted me to a few more bugs with device placement.

Before entering the GPU scope, please create the global step variable (either on the default device or explicitly on the CPU). I added it on line 150:

global_step = pt.global_step()

I surrounded lines 63-73 with the GPU device context.

Since I don't think that this should be as hard as it is, I've opened a new issue to track this.
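A rough sketch of how the two changes might fit together, using the names from this thread (pt.global_step and the embedding_lookup call are from shakespeare.py and the comments above; the constants and the placeholder input are hypothetical placeholders, not the actual shakespeare.py code):

import tensorflow as tf
import prettytensor as pt

# Placeholder values for illustration only; shakespeare.py defines its own.
BATCH_SIZE, TIMESTEPS, CHARS, EMBEDDING_SIZE = 32, 100, 128, 16

# Change 1: create the global step before entering the GPU scope, so it lives
# on the default device (or pin it to '/cpu:0' explicitly).
global_step = pt.global_step()

# Change 2: build the model (roughly lines 63-73 of shakespeare.py) inside the
# GPU device scope.
with tf.device('/gpu:1'):
    text_in = pt.wrap(tf.placeholder(tf.int32, [BATCH_SIZE, TIMESTEPS]))

    # The embedding lookup is CPU-only, so it gets a nested CPU scope.
    with tf.device('/cpu:0'):
        embedded = text_in.embedding_lookup(CHARS, [EMBEDDING_SIZE])

    # ... the rest of the model would be built from `embedded` on the GPU ...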
