This repository has been archived by the owner on Dec 29, 2022. It is now read-only.
This may be outside the scope of the intended use of this project, but is it possible to specify which GPU card to run on from prettytensor? I have a long-running (non-TensorFlow) training job on GPU:0, but Pretty Tensor also seems to default to running (only) on GPU:0.
Pretty Tensor is completely compatible with the device scoping mechanism in TF.
If you wrap your model with:
with tf.device('/gpu:1'):
    build_my_model()
then it will assign the appropriate device. When you explicitly specify a GPU device, no attempt is made to put CPU-only ops on a CPU, so those ops (such as embedding_lookup) either need a nested device context or need to be built before the GPU device specification.
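The nested device context can be sketched like this (a minimal graph-mode example; the lookup table and ids are illustrative, not taken from any Pretty Tensor code):

```python
import tensorflow as tf

# Sketch of the nested-device pattern. Building in graph mode means no
# GPU is actually required just to construct and inspect the graph.
graph = tf.Graph()
with graph.as_default():
    with tf.device('/gpu:1'):
        # Ops built in this scope are assigned to GPU 1.
        with tf.device('/cpu:0'):
            # CPU-only ops, such as embedding lookups, are pinned
            # to the CPU by the nested device context.
            table = tf.constant([[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]])
            ids = tf.constant([2, 0])
            embedded = tf.nn.embedding_lookup(table, ids)
        # Back in the outer scope: this op lands on GPU 1.
        doubled = embedded * 2.0

print(embedded.device)  # CPU device from the inner context
print(doubled.device)   # GPU device from the outer context
```

The inner `tf.device('/cpu:0')` overrides the outer `/gpu:1` only for the ops built inside it, which is what lets CPU-only ops coexist with a GPU-wrapped model.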
Intelligently guessing the correct device is outside the scope of PT, but support for explicit assignments is definitely in scope.
Could you provide an example? In shakespeare.py, where would the with tf.device('/gpu:1') go?
Presumably the change you made (Issue #1) for embedding lookup would provide the necessary assignment for that Op?
with tf.device('/cpu:0'):
    embedded = text_in.embedding_lookup(CHARS, [EMBEDDING_SIZE])