
Providing resource variables to feed_dict can be confusing #19884

Closed
ecvgit opened this issue Jun 10, 2018 · 8 comments

@ecvgit

ecvgit commented Jun 10, 2018

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
  • TensorFlow installed from (source or binary): Binary
  • TensorFlow version (use command below): 1.8
  • Python version: 3.6.2
  • Bazel version (if compiling from source):
  • GCC/Compiler version (if compiling from source):
  • CUDA/cuDNN version: None
  • GPU model and memory: CPU
  • Exact command to reproduce:

Describe the problem

According to the documentation, tf.contrib.eager has classes that can be used in both eager and graph modes. However, the results are not consistent when we pass a variable to feed_dict.

Source code / logs

import tensorflow as tf
tfe = tf
x = tfe.Variable(4)
y = x * x
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: 15}))  # Prints 225, which is correct

tfe = tf.contrib.eager
x = tfe.Variable(4)
y = x * x

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: 15}))  # Prints 16, which is not consistent with 225 above

@marshalhayes
Contributor

Hi @ecvgit. I am not too familiar with eager execution; however, I believe you have to enable it at the start of your program. Also, why are you using tf.contrib.eager? All you need to do is enable eager execution with tf.enable_eager_execution() and everything should work spectacularly.

The point of eager execution is that operations are evaluated immediately, like normal Python objects, so I also believe you don't need to run anything within a tf.Session(). Hope this helps.
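
A minimal sketch of the suggestion above, assuming the TF 1.x API used in this thread (eager execution has to be enabled before any ops are built):

import tensorflow as tf

tf.enable_eager_execution()  # must run at the very start of the program

x = tf.contrib.eager.Variable(4)
y = x * x
print(y.numpy())  # 16, evaluated immediately; no Session or feed_dict involved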

@ecvgit
Author

ecvgit commented Jun 10, 2018

In this case, I am not using eager execution at all. I am using graph execution with both tf.Variable and tfe.Variable. My understanding is that if I use tfe.Variable (instead of tf.Variable), I should be able to switch between graph and eager modes easily. For this to work, the results should be consistent between tf.Variable and tfe.Variable, and as the example above shows, they are not. If this is expected behavior, it is not clear from the documentation.
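
A hypothetical sketch of the workflow described here: the same model code, written once with tfe.Variable, run under either graph or eager execution depending on a flag chosen at program start (USE_EAGER is an illustrative name, not anything from TensorFlow):

import tensorflow as tf
tfe = tf.contrib.eager

USE_EAGER = False  # hypothetical switch; eager execution must be enabled before any ops exist
if USE_EAGER:
    tf.enable_eager_execution()

x = tfe.Variable(4)
y = x * x

if USE_EAGER:
    print(y.numpy())  # evaluated immediately
else:
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(y))  # evaluated by the session; prints 16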

@marshalhayes
Contributor

Hmm, I'm not sure I understand this. I'll let someone else pick up this Issue. Perhaps the assigned @tatianashp can help you.

@tatianashp tatianashp added the comp:eager Eager related issues label Jun 15, 2018
@tatianashp
Member

@asimshankar Do you have any thoughts?

@asimshankar
Contributor

asimshankar commented Jun 15, 2018

Thanks for bringing this up; I can understand the confusion.

To be clear, the intention is that symbols in the tf.contrib.eager namespace can be used in both eager and graph execution, not that they are equivalent to other symbols in the tf namespace. So, tfe.Variable works with both graph and eager execution, but its behavior is not always the same as tf.Variable.

In fact, tfe.Variable is a different implementation of variables, known as "resource variables" (introduced independently of eager execution), which have more clearly defined memory semantics. We're in the process of making resource variables more prevalent (they are required for TPUs as well) because of this saner memory model.
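
As an aside, a minimal graph-mode sketch, assuming the TF 1.x tf.get_variable API: a resource variable can be requested directly with use_resource=True, without going through tf.contrib.eager, and feeding it shows the same behavior reported above:

import tensorflow as tf

# Request a resource variable in plain graph code (no tf.contrib.eager involved).
x = tf.get_variable("x", initializer=tf.constant(4), use_resource=True)
y = x * x

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Expected to print 16, mirroring the tfe.Variable case above:
    # y is computed from the variable's read, so the fed value is ignored.
    print(sess.run(y, feed_dict={x: 15}))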

Now, when it comes to feed_dict, the intention there is to be able to feed values for Tensors, not Variables. The fact that it works for variables is more of an accident :). We should perhaps make that an error (at least for resource variables) instead of the seemingly mysterious behavior you see (CC @alextp, who is also writing up some documentation around this).

The program you've described above:

x = tfe.Variable(4)
y = x * x

is essentially shorthand for the program:

x = tfe.Variable(4)
x_1 = x.read_value()  # Return a Tensor corresponding to the current value of the Variable
y = x_1 * x_1

And when expressed like this, you could use feed_dict:

import tensorflow as tf
tfe = tf.contrib.eager

x = tfe.Variable(4)
x_1 = x.read_value()
y = x_1 * x_1

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x_1: 15}))  # Will print 225

Though this works, do you typically include variables in the feed_dict? The more usual pattern, sketched below, feeds plain Tensors instead.
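
For reference, a minimal sketch of that usual pattern, feeding a plain placeholder Tensor rather than a variable:

import tensorflow as tf

x = tf.placeholder(tf.int32, shape=[], name="x")
y = x * x

with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: 15}))  # Prints 225; no variable involved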

I understand this is confusing; hopefully the explanation makes sense.
The bug itself isn't about eager, so I'm going to go ahead and adjust the title to reflect that while we figure out how to make this a clearer error.

Hope that sounds reasonable. Thanks!

@asimshankar asimshankar removed the comp:eager Eager related issues label Jun 15, 2018
@asimshankar asimshankar changed the title Results not consistent in eager mode when passing feed_dict to a variable Providing resource variables to feed_dict can be confusing Jun 15, 2018
@tensorflowbutler
Member

Nagging Assignee @tatianashp: It has been 14 days with no activity and this issue has an assignee. Please update the label and/or status accordingly.

@tensorflowbutler
Member

Nagging Assignee @tatianashp: It has been 29 days with no activity and this issue has an assignee. Please update the label and/or status accordingly.

@tensorflowbutler
Member

Nagging Assignee @tatianashp: It has been 44 days with no activity and this issue has an assignee. Please update the label and/or status accordingly.

@tatianashp tatianashp added the type:support Support issues label Jul 29, 2018