Why doesn't TensorFlow support a tensor as a feed_dict value? #3389
I'm new to TensorFlow, but in my experience you really don't need to feed a tf.Tensor to feed_dict. Once you have a tensor with a value, you can get that value with your_tensor.eval() or sess.run(your_tensor) and then feed the output to your feed_dict.
I have also searched for how to get the value of a tensor to solve this problem and seen solutions like the one above. However, when I try image_batch.eval() or even image_buffer.eval(), the program just keeps running without stopping or producing any output. I have managed to get a constant value from a tensor, but that is useless with tensors obtained from tf.read_file() and the like.
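For what it's worth, the hang when evaluating image_batch is usually because ops like tf.train.batch are backed by queues, and the queue-runner threads that fill those queues were never started. A minimal sketch of this, written against the TF 1.x API via tf.compat.v1 (the toy pipeline here is my own, not the code from this thread):

```python
# Sketch: queue-backed batch ops block forever unless queue runners are started.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# A tiny stand-in for a file-reading pipeline.
queue = tf.train.input_producer(tf.constant([1.0, 2.0, 3.0]))
batch = tf.train.batch([queue.dequeue()], batch_size=2)

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    # Without this call, sess.run(batch) (or batch.eval()) hangs forever.
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    result = sess.run(batch)
    coord.request_stop()
    coord.join(threads)
```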
If you have your data in tensors anyway, you don't need to use placeholders and feed_dict at all.
Yes, I can see from the example training procedures that I can just use tensors as inputs to any ops in TensorFlow. However, in this case I am using a predefined network, so I am unsure whether I can change the network definition. That is to say, I have no idea how to change the input of the loss function (the definition of the nodes "images" and "prob" is …
StackOverflow is a better venue. One key thing to note is that …
I have the same question as @wishforgood. How is this done?
Why is this closed? The author raises legitimate concerns.
Yes, it is confusing and makes you write complicated (or slow) code -- I think you should reopen and address this.
I have the same opinion. These days I am using a CNN for some text work. Because the input's dimensionality is too large, I have to convert it to another type to avoid memory errors. But what confuses me is the same thing: why can't a tensor be a feed_dict value?!
Looks like you could use a session_handle to do this.
The below is an example from the get_session_handle doc:
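The example itself seems to have been dropped from the comment. Below is a reconstruction of the get_session_handle doc example under tf.compat.v1 (treat it as a sketch rather than the verbatim doc text): a value is computed, kept inside the session, and a handle to it is fed back through feed_dict.

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

with tf.Session() as sess:
    a = tf.constant(10.0)
    b = tf.constant(5.0)
    c = tf.multiply(a, b)
    # Run c and keep its value inside the session; get back a handle to it.
    h = sess.run(tf.get_session_handle(c))

    # get_session_tensor returns a feedable placeholder for the handle
    # plus a tensor holding the stored value.
    p, x = tf.get_session_tensor(h.handle, tf.float32)
    y = tf.multiply(x, 0.1)
    # Feed the *handle* (not a tf.Tensor) through feed_dict.
    result = sess.run(y, feed_dict={p: h.handle})
```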
This works for a scalar value, but when I try to get a handle for a sequence of inputs, it fails. The operation I performed feeds the sequence of numbers from -10 to 10 into a function. Part of the code: val_x = tf.range(-10, 10, 0.1); val_f = sess.run(f, feed_dict={x: val_x}). But if I replace val_x = tf.range(-10, 10, 0.1) with val_x = np.arange(-10, 10, 0.1), it works fine without any errors. Why is the sequence generated with tf.range() treated as a tensor object while the one from np.arange() is not? Thanks in advance!
@Anilkumares That's because feed_dict can't accept a tensor object: tf.range() returns a tf.Tensor, while np.arange() returns a numpy ndarray. f = tf.square(x) + 2 * x + 5 can be fed with values generated by numpy.
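One fix consistent with this answer is to evaluate the tf.range() tensor first, so that what reaches feed_dict is already a numpy array. A sketch (x and f as in the comment above, run under tf.compat.v1):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.placeholder(tf.float32)
f = tf.square(x) + 2 * x + 5

with tf.Session() as sess:
    # Materialize the tensor: sess.run returns a numpy ndarray.
    val_x = sess.run(tf.range(-10.0, 10.0, 0.1))
    # Feeding the ndarray is fine; feeding the tf.range tensor raises TypeError.
    val_f = sess.run(f, feed_dict={x: val_x})
```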
You can do something like this in a loop to make batches without using any TensorFlow backend. This ensures that its output remains a non-tensor.
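The code block appears to be missing from this comment. A plain-Python sketch of the idea (batching with list slicing, so feed_dict only ever sees native Python values; the helper name is my own):

```python
def make_batches(data, batch_size):
    """Slice a list into consecutive batches of at most batch_size items."""
    return [data[i:i + batch_size] for i in range(0, len(data), batch_size)]

batches = make_batches(list(range(10)), batch_size=4)
# -> [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Each batch is an ordinary list (or could be wrapped in np.asarray), which is an acceptable feed value.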
The short answer to the question is that TensorFlow makes simple things complicated.
I may have a different question, but similar to yours: if my input data is already a tensor, how can I feed it into the network? This is very simple and straightforward in PyTorch, but as we know, TensorFlow uses a static computational graph. We have to pre-define the graph and then change the input in a loop. If the inputs of the network and the loss function are defined when the graph is created, how can we change them during training?
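One answer within graph-mode TF 1.x is to build the network directly on top of an iterator's get_next() tensor instead of a placeholder, so the input changes on every sess.run without feed_dict. A sketch with a toy "loss" (names from the tf.compat.v1 API; the dataset is a stand-in for real input data):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

dataset = tf.data.Dataset.from_tensor_slices([1.0, 2.0, 3.0, 4.0]).batch(2)
iterator = tf.data.make_one_shot_iterator(dataset)
next_batch = iterator.get_next()          # a tensor, used directly as input
loss = tf.reduce_sum(tf.square(next_batch))

with tf.Session() as sess:
    first = sess.run(loss)    # computed on [1.0, 2.0]
    second = sess.run(loss)   # computed on [3.0, 4.0]
```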
One workaround is to construct a GraphDef and load it with input_map; through this, a tf.data.Iterator can be hooked up to any arbitrary graph that previously used placeholders for input. This is especially beneficial for inference, since the weights can be frozen.
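A sketch of that workaround: serialize a placeholder-based graph to a GraphDef, then re-import it with input_map so the placeholder is replaced by an arbitrary tensor. A constant stands in for the iterator here, and the node names are my own:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# A "predefined" graph that expects its input through a placeholder.
g = tf.Graph()
with g.as_default():
    x = tf.placeholder(tf.float32, shape=[None], name="x")
    tf.square(x, name="y")
graph_def = g.as_graph_def()

# Re-import it, rewiring the placeholder to an existing tensor.
with tf.Graph().as_default():
    source = tf.constant([1.0, 2.0, 3.0])   # could be Iterator.get_next()
    y, = tf.import_graph_def(graph_def,
                             input_map={"x:0": source},
                             return_elements=["y:0"])
    with tf.Session() as sess:
        result = sess.run(y)                # no feed_dict needed
```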
For anyone who lands on this question, Lakrish provided a very usable solution (only a few lines of code). I have posted an example using this suggestion at this related SO question: https://stackoverflow.com/questions/38618960/tensorflow-how-to-insert-custom-input-to-existing-graph/57015133#57015133
What I'm trying to do
I am trying to extract CNN features for my own images with a residual net, based on https://github.com/ry/tensorflow-resnet. I plan to input image data from JPG files before exploring how to convert the images into a single file.
What I have done
I have read https://www.tensorflow.org/versions/r0.9/how_tos/reading_data/index.html and some related material about how to input data, such as feeding and placeholders. Here is my code:
What my question is
The code above produced the following error:
TypeError: The value of a feed cannot be a tf.Tensor object. Acceptable feed values include Python scalars, strings, lists, or numpy ndarrays.
You can see that I am trying to do all the work in the context of TensorFlow. As far as I know, TensorFlow is a framework in which all the inputs and outputs of nodes are tensors, so I am quite confused about why feed_dict doesn't support a tensor as an input, while batch_join and other ops return tensors. I also found that the MNIST example of TensorFlow uses yet another function to produce batches, even though TensorFlow already provides batching methods. So I wonder whether there is an elegant way to do these things. If this is due to my lack of searching and careful reading, I really apologize.
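A minimal sketch of this failure mode and the common workaround (materialize the tensor with sess.run before feeding); the shapes and names below are hypothetical, not from the code in this issue:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.placeholder(tf.float32, shape=[2])
y = tf.reduce_sum(x)
data = tf.constant([1.0, 2.0])   # stands in for the output of a reading pipeline

with tf.Session() as sess:
    try:
        # Feeding a tf.Tensor raises the TypeError quoted above.
        sess.run(y, feed_dict={x: data})
    except TypeError:
        pass
    # Workaround: evaluate the tensor first, then feed the resulting ndarray.
    result = sess.run(y, feed_dict={x: sess.run(data)})
```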