tf.config.set_soft_device_placement() seems to have no effect #29342
Comments
Also, if I activate soft placement and I try to place an operation on a GPU device that does not exist, I still get an exception. I expected TF to fall back to using the CPU:

    import tensorflow as tf

    tf.config.set_soft_device_placement(True)
    with tf.device("/gpu:1"):
        f = tf.Variable(42.0)

This raises an exception instead of falling back.
Hey, have you had a look at this example from the GPU guide at tensorflow.org? To me it looks like TF is choosing which device is suitable to execute this op.
Hi @lufol, thanks for your answer. Yes, I saw this doc; that's actually why I filed this bug. I don't see what soft placement would change in this example. I expect soft placement to change something when the user requests a specific device but there is no kernel for the op on that device, or the device does not exist. This would be useful if you want to write a program and deploy it on machines that may or may not have GPUs, for example. So on a machine without any GPU, I expect the following code to work without any error, and just fall back to placing the variable on the CPU:

    tf.config.set_soft_device_placement(True)
    with tf.device("/gpu:0"):
        x = tf.Variable(1.0)

Perhaps I'm misunderstanding what soft placement is supposed to do.
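One way to get the deploy-anywhere behaviour described above without relying on soft placement is to choose the device explicitly based on what the machine actually has. This is only an illustration, not something proposed in the thread; it uses tf.config.list_physical_devices (tf.config.experimental.list_physical_devices in early 2.0 builds):

    import tensorflow as tf

    # Pick a device string based on whether any GPU is visible to TensorFlow.
    gpus = tf.config.list_physical_devices("GPU")
    device = "/gpu:0" if gpus else "/cpu:0"

    with tf.device(device):
        x = tf.Variable(1.0)  # lands on the GPU if one exists, otherwise on the CPU

    print(x.device)

This sidesteps the placement question entirely, at the cost of an explicit check in user code.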
Okay, I got it: the semantics of soft placement have changed since TF 1. In TF 1, the following code works fine on a machine without any GPU (I just tried it):

    import tensorflow as tf

    with tf.device("/gpu:42"):   # there is no such device
        i = tf.Variable(123)     # plus integers are not allowed on GPUs

    # but let's make TF super soft and tolerant:
    config = tf.ConfigProto()
    config.allow_soft_placement = True
    with tf.Session(config=config) as sess:
        sess.run(i.initializer)
        print(sess.run(i))

This prints 123, no problem. :) That's why I was surprised that it didn't work in TF 2.0. I'm actually not sure when you would ever need to call tf.config.set_soft_device_placement() if it does not cover this case.
From what I understand, you either use …
So I guess in your case the first option would be best.
Hi @lufol, I just ran some tests, and it really seems like tf.config.set_soft_device_placement() has no effect. I'm quite puzzled.
I have tried to reproduce this on Colab with TF 2.0.0-dev20190527, with set_soft_device_placement set to True as well as False, and got the same result in both scenarios, as mentioned in the issue.
Yes @ageron, you are right. Maybe one could add something to the doc explaining this behaviour, if it is intended. The behaviour is reproducible here.
@jaingaurav, any comments?
Sorry for the delay. There have been multiple discussions about this internally. It seems that soft placement is respected by …
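A sketch of one possible reading, assuming the truncated remark above refers to ops executed inside a tf.function (an assumption, not something the surviving text states): when the same device request runs as part of a traced function, the placer is expected to honour the soft placement setting.

    import tensorflow as tf

    tf.config.set_soft_device_placement(True)

    @tf.function
    def compute_on_gpu():
        # /gpu:0 may not exist on this machine; with soft placement enabled,
        # graph execution is expected to move the op to the CPU instead.
        with tf.device("/gpu:0"):
            return tf.constant(42.0) * 2.0

    print(compute_on_gpu())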
Thanks @jaingaurav. I'm only thinking about TF 2.x; I understand that it must keep the same behavior in TF 1.x.
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution: MacOSX 10.13.6
- Mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version: VERSION='2.0.0-dev20190527', GIT_VERSION='v1.12.1-2821-gc5b8e15064'
- Python version: 3.5
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: CUDA 10.0 (it's just a Colab GPU instance)
- GPU model and memory: Tesla P4, 15079 MiB
Describe the current behavior
The tf.config.set_soft_device_placement() function seems to have no effect: when I create an integer variable and try to place it on a GPU, I still get an exception.

Describe the expected behavior
I expect soft placement to fall back to using the CPU. No error.
Code to reproduce the issue
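A presumed reproduction, reconstructed from the description above and the follow-up comments (the original snippet is not reproduced verbatim here):

    import tensorflow as tf

    tf.config.set_soft_device_placement(True)

    # int32 variables have no GPU kernel (and /gpu:0 may not even exist),
    # so without a working soft-placement fallback this raises an exception.
    with tf.device("/gpu:0"):
        i = tf.Variable(123)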
Other info / logs
The code above raises an exception instead of falling back to the CPU.