How to extract predictions #97

Closed
vkuznet opened this Issue Nov 10, 2015 · 23 comments

@vkuznet

vkuznet commented Nov 10, 2015

Hi, can someone point to a code example or documentation showing how to extract the final predictions after training a model? For example, it would be nice to complement the existing tutorials, e.g. MNIST, with an additional (final) step that gets predictions out of the trained model.

@pannous


pannous commented Nov 10, 2015

prediction = tf.argmax(y, 1)
print prediction.eval(feed_dict={x: mnist.test.images})

In the fully_connected_feed.py mnist example:

        prediction = tf.argmax(logits, 1)
        best = sess.run([prediction], feed_dict)
        print(best)
@abunsen


abunsen commented Nov 10, 2015

+1

@vkuznet


vkuznet commented Nov 10, 2015

Hi,
well I see that step, but if I print the correct_prediction I got
Tensor("Equal:0", shape=TensorShape([Dimension(None)]), dtype=bool) which is
<class 'tensorflow.python.framework.ops.Tensor'>

and I want to get array of numbers (probabilities). My question is how to get
it from Tensor then?

pannous wrote:

Test trained model

correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))


@pannous


pannous commented Nov 10, 2015

the array of numbers is returned by the eval / run method:

prediction = tf.argmax(y, 1)
print prediction.eval(feed_dict={x: mnist.test.images})
@vkuznet


vkuznet commented Nov 10, 2015

Thanks, this works if I also pass the session to eval.
Here is a working example for those who are interested:

import input_data
import tensorflow as tf

mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
x = tf.placeholder("float", shape=[None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
y_ = tf.placeholder("float", shape=[None, 10])
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

# initialize the variables and start a session
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)

# train on mini-batches
for i in range(10):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

# evaluate accuracy and extract predictions on the test set
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print "accuracy", sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})
prediction = tf.argmax(y, 1)
print "predictions", prediction.eval(feed_dict={x: mnist.test.images}, session=sess)


@vkuznet


vkuznet commented Nov 10, 2015

My next question is how to get probabilities of predictions?

@pannous


pannous commented Nov 10, 2015

just y or y/sum(y)

[per row, see below]

@vkuznet


vkuznet commented Nov 10, 2015

thanks, it works

probabilities=y
print "probabilities", probabilities.eval(feed_dict={x: mnist.test.images}, session=sess)


@pannous


pannous commented Nov 10, 2015

hold on, we need to normalize per row,
i.e.
y = [[0.8, 0.5, 0.1], [0.1, 0.2, 0.4]]
probabilities = y / [[1.4], [0.7]]
let's see how we can use reduce_sum for that ...
def reduce_sum(input_tensor, reduction_indices=None, keep_dims=False, name=None):
...
probabilities = y / tf.reduce_sum(y, 1, keep_dims=True)
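In NumPy terms (an illustrative sketch, not part of the thread's TensorFlow code), the per-row normalization with the same example values looks like:

```python
import numpy as np

# Same example values as above: two rows of unnormalized scores.
y = np.array([[0.8, 0.5, 0.1],
              [0.1, 0.2, 0.4]])

# Sum across each row (axis=1) and keep the dimension so broadcasting
# divides every element by its own row's total.
row_sums = y.sum(axis=1, keepdims=True)   # [[1.4], [0.7]]
probabilities = y / row_sums

print(probabilities.sum(axis=1))          # each row now sums to 1
```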

@rafaljozefowicz


rafaljozefowicz commented Nov 10, 2015

The output of tf.nn.softmax(.) is already normalized, so it should just work

@pannous


pannous commented Nov 10, 2015

In this case, yes, but it doesn't hurt to have the general solution here.

@danielzak


danielzak commented Nov 18, 2015

Expanding on the question above: when I try the suggestion above (which works fine for the MNIST softmax example) on the MNIST convnet example, I get the following error:

tensorflow.python.framework.errors.InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_2' with dtype float

I have just copied the code from http://www.tensorflow.org/tutorials/mnist/pros/index.md and added the following four lines:

...
prediction = tf.argmax(y_conv, 1)
print "predictions", prediction.eval(feed_dict={x: mnist.test.images}, session=sess)

probabilities = y_conv
print "probabilities", probabilities.eval(feed_dict={x: mnist.test.images}, session=sess)

Does anyone know what causes this?

@danielzak


danielzak commented Nov 18, 2015

I can answer my own question.

The convnet example has an additional placeholder in the feed_dict that I had missed. In this case the feed_dict should look like this: feed_dict = {x: [your_image], keep_prob: 1.0}
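To illustrate the fix (a pure-Python sketch, not TensorFlow's actual implementation; the placeholder names are the tutorial's): every placeholder that a fetched tensor depends on must appear in the feed_dict, including keep_prob, which is set to 1.0 at test time to disable dropout.

```python
# Illustrative stand-in for TF's feed_dict check: every placeholder a
# fetched tensor depends on must be fed, or TF raises InvalidArgumentError.
REQUIRED_PLACEHOLDERS = {"x", "keep_prob"}

def check_feed_dict(feed_dict):
    missing = REQUIRED_PLACEHOLDERS - set(feed_dict)
    if missing:
        raise ValueError(
            "You must feed a value for placeholder(s): %s" % sorted(missing))
    return True

# Feeding only x reproduces the error above; adding keep_prob
# (1.0 disables dropout at test time) fixes it.
check_feed_dict({"x": [0.0], "keep_prob": 1.0})
```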

@vrv


Contributor

vrv commented Nov 18, 2015

Closing -- looks like these issues are resolved. Please re-open if there's something else to do here.

@mijung-kim


mijung-kim commented Oct 8, 2016

I have read the whole thread and have another question. I am currently using

with tf.Graph().as_default(): ... probability = tf.nn.softmax(logits)

In this case, how can I print out or save the probability values in CSV format?

Thank you in advance!

@rohitchopra32


rohitchopra32 commented Apr 18, 2017

I have some code like this:

run_fc.py

# Define input placeholders
images_placeholder = tf.placeholder(tf.float32, shape=[None, IMAGE_PIXELS],  name='images')
labels_placeholder = tf.placeholder(tf.int64, shape=[None], name='image-labels')

# Operation for the classifier's result
logits = two_layer_fc.inference(images_placeholder, IMAGE_PIXELS,
  FLAGS.hidden1, CLASSES, reg_constant=FLAGS.reg_constant)

# Operation for the loss function
loss = two_layer_fc.loss(logits, labels_placeholder)

# Operation for the training step
train_step = two_layer_fc.training(loss, FLAGS.learning_rate)

# Operation calculating the accuracy of our predictions
accuracy = two_layer_fc.evaluation(logits, labels_placeholder)

two_layer_fc.py:

def evaluation(logits, labels):
  '''Evaluates the quality of the logits at predicting the label.

  Args:
    logits: Logits tensor, float - [batch size, number of classes].
    labels: Labels tensor, int64 - [batch size].

  Returns:
    accuracy: the percentage of images where the class was correctly predicted.
  '''

  with tf.name_scope('Accuracy'):
    # Operation comparing prediction with true label
    correct_prediction = tf.equal(tf.argmax(logits,1), labels)

    # Operation calculating the accuracy of the predictions
    accuracy =  tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    # Summary operation for the accuracy
    tf.summary.scalar('train_accuracy', accuracy)

  return accuracy

I want the evaluation function in two_layer_fc.py, after it computes correct_prediction, to also show the predicted label next to the original label from labels.

I tried adding this:

prediction = tf.argmax(y, 1)
print prediction.eval(feed_dict={x: mnist.test.images})

but it only gives me an error (my script has no y, x, or mnist variables; logits and images_placeholder would be the equivalents here).

@AKlent


AKlent commented Oct 12, 2017

Hi, I am following TensorFlow's "Layers Module" tutorial at https://www.tensorflow.org/tutorials/layers. The code there is different from the snippets mentioned above. Could you help me get the prediction results and their respective probabilities?
I need to see them to understand the model further. I would also like to know if there is a way to save the results - the predictions and probabilities - to CSV.

Thank you so much for your time.

lukeiwanski pushed a commit to codeplaysoftware/tensorflow that referenced this issue Oct 26, 2017

[OpenCL] Provides SYCL kernels for 3D pooling (#97)
* [OpenCL] Adds SYCL kernels for 3D pooling

Uses simple SYCL kernels to provide implementations for all 3D pooling
ops currently in use. These kernels pass the tests, but haven't really
been optimized.

These need benchmarking to compare with Eigen and CPU kernels.

* [OpenCL] Refactors SYCL kernels to use parameter struct

Moves a lot of the functor parameters into a separate data struct, with
the aim of simplifying the functor code.

* [OpenCL] Removes extra fetching of tensor dimensions

We already had the tensor dimensions passed into
LaunchMaxPooling3dGradOP, so don't need to fetch them from the
tensor.

* [OpenCL] Renames SYCL 3D pooling kernels

Adds '3D' to kernel names.

* [OpenCL] Adds 3D pooling SYCL kernel documentation

* [OpenCL] Adds guards around SYCLDevice typedef

* [OpenCL] Use forward input for SYCL MaxPool3DGradGrad

When we had a mix of SYCL and CPU kernels the forward_input would break
and cause computation problems. Now that we have SYCL kernels for all 3D
pooling operations, this is not a problem.

* [OpenCL] Reformats SYCL 3D pooling code

* [OpenCL] Moves SYCL utils into separate header

* [OpenCL] Simplifies SYCL Pool param constructors

Instead of each constructor initialising the data, simplifies the
constructors to call the first constructor.

@timmolter

This comment has been minimized.

timmolter commented Jan 12, 2018

Using the Estimator API, I figured out how to extract the predictions and create a confusion matrix. Maybe it will help someone else out.

https://github.com/knowm/HelloTensorFlow/blob/master/src/iris_DNN_classifier.py

predicted_classes = [p["class_ids"][0] for p in predictions]
print(
    "Test Samples, Class Predictions:    {}\n"
    .format(predicted_classes))
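For the confusion-matrix step, one minimal sketch (plain NumPy, with hypothetical labels; the linked script may build it differently):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes):
    # cm[i, j] counts examples whose true class is i and predicted class is j.
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Hypothetical labels; predicted_classes would come from the Estimator above.
y_true = [0, 1, 2, 2, 1]
y_pred = [0, 2, 2, 2, 1]
cm = confusion_matrix(y_true, y_pred, 3)
print(cm)
```

The diagonal holds the correctly classified counts, so accuracy is cm.trace() / cm.sum().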
@shadzic


shadzic commented Jan 18, 2018

Thank you, very helpful.

For the DNNLinearCombinedClassifier, I used this:

# predicted class
predictions = model.predict(input_fn=input_fn)
y_pred = [p["class_ids"][0] for p in predictions]

# probability of being predicted as 1
predictions = model.predict(input_fn=input_fn)
y_prob = [p["probabilities"][1] for p in predictions]

Weirdly, I need to re-run predictions every time to get the lists
(y_prob would come back as [] if I didn't re-run predictions first).
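The re-running is needed because Estimator.predict returns a generator, which is exhausted after a single pass; materializing it with list() lets both fields be read from one predict call. A sketch with a stand-in generator:

```python
def predict():
    # Stand-in for model.predict(input_fn=input_fn): yields one dict per
    # example and, like any generator, can only be consumed once.
    for p in [{"class_ids": [0], "probabilities": [0.9, 0.1]},
              {"class_ids": [1], "probabilities": [0.2, 0.8]}]:
        yield p

predictions = list(predict())   # materialize the generator once
y_pred = [p["class_ids"][0] for p in predictions]
y_prob = [p["probabilities"][1] for p in predictions]

# Without list(), a second pass over the same generator yields nothing:
gen = predict()
first = [p["class_ids"][0] for p in gen]
second = [p["class_ids"][0] for p in gen]   # [] - already exhausted
```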

@ASWATHISATHEESH


ASWATHISATHEESH commented Feb 6, 2018

y_output = vqa_model.predict([question_features, image_features])
NameError: name 'question_features' is not defined

What would be the solution?

@deepikaverma07


deepikaverma07 commented Aug 27, 2018

probabilities = []
# inside the prediction loop:
probabilities.append(np.max(y.eval(feed_dict={x: [test_set]}, session=sess), axis=1)[0])
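In NumPy terms (an illustration with made-up probabilities; y.eval(...) in the snippet above returns such an array), np.max(..., axis=1) picks each row's highest class probability and np.argmax gives the corresponding predicted class:

```python
import numpy as np

# Stand-in for y.eval(...): one row of class probabilities per example.
probs = np.array([[0.1, 0.7, 0.2],
                  [0.5, 0.3, 0.2]])

confidences = np.max(probs, axis=1)    # highest probability per example
predicted = np.argmax(probs, axis=1)   # index of that probability (the class)
```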
