
confusion of MNIST example #39

Closed
HengDing890 opened this issue Dec 16, 2016 · 1 comment
HengDing890 commented Dec 16, 2016

Hi all.

```python
import tensorflow as tf
import tensorlayer as tl

sess = tf.InteractiveSession()

# prepare data
X_train, y_train, X_val, y_val, X_test, y_test = \
                                tl.files.load_mnist_dataset(shape=(-1, 784))
# define placeholder
x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
y_ = tf.placeholder(tf.int64, shape=[None, ], name='y_')

# define the network
network = tl.layers.InputLayer(x, name='input_layer')
network = tl.layers.DropoutLayer(network, keep=0.8, name='drop1')
network = tl.layers.DenseLayer(network, n_units=800,
                               act=tf.nn.relu, name='relu1')
network = tl.layers.DropoutLayer(network, keep=0.5, name='drop2')
network = tl.layers.DenseLayer(network, n_units=800,
                               act=tf.nn.relu, name='relu2')
network = tl.layers.DropoutLayer(network, keep=0.5, name='drop3')
# the softmax is implemented internally in tl.cost.cross_entropy(y, y_) to
# speed up computation, so we use identity here.
# see tf.nn.sparse_softmax_cross_entropy_with_logits()
network = tl.layers.DenseLayer(network, n_units=10,
                               act=tf.identity,
                               name='output_layer')

# define cost function and metric.
y = network.outputs
cost = tl.cost.cross_entropy(y, y_)
correct_prediction = tf.equal(tf.argmax(y, 1), y_)
acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
y_op = tf.argmax(tf.nn.softmax(y), 1)

# define the optimizer
train_params = network.all_params
train_op = tf.train.AdamOptimizer(learning_rate=0.0001, beta1=0.9, beta2=0.999,
                                  epsilon=1e-08, use_locking=False).minimize(cost, var_list=train_params)

# initialize all variables
# (deprecated in TF >= 0.12; use tf.global_variables_initializer() there)
sess.run(tf.initialize_all_variables())

# print network information
network.print_params()
network.print_layers()

# train the network
tl.utils.fit(sess, network, train_op, cost, X_train, y_train, x, y_,
             acc=acc, batch_size=500, n_epoch=500, print_freq=5,
             X_val=X_val, y_val=y_val, eval_train=False)

# evaluation
tl.utils.test(sess, network, acc, X_test, y_test, x, y_, batch_size=None, cost=cost)

# save the network to .npz file
tl.files.save_npz(network.all_params, name='model.npz')
sess.close()
```

I am a little confused by the following line in tutorial_mnist_simple.py:

```python
39: y_op = tf.argmax(tf.nn.softmax(y), 1)
```

According to the documentation, "y_op is the integer output representing the class index."
It seems that we should compare "y_op" (the network's predicted class) with "y_" (the ground-truth label).

But on line 36, the cost function is defined as follows:

```python
36: cost = tl.cost.cross_entropy(y, y_)
```

So the cost compares the variable "y" (not "y_op") with "y_".

However, I found that the variable "y_op" is never used in tutorial_mnist_simple.py: from line 40 to the last line, it never appears again.

So how does this line work in tutorial_mnist_simple.py? Does it mean that the line "y_op = tf.argmax(tf.nn.softmax(y), 1)" has no effect in tutorial_mnist_simple.py?

It seems really strange!
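For what it's worth, here is a small numpy sketch (with made-up logits, not values from the tutorial) of the relationship between the two lines: cross_entropy needs the raw logits "y", while "y_op" just converts those logits into an integer class index. Since softmax is monotonic, argmax(softmax(y)) is the same as argmax(y).

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax along the last axis
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# hypothetical logits for a batch of 3 examples, 10 classes
y = np.array([[2.0, 0.1, -1.0, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 3.0, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 4.0]])

# what y_op computes: one integer class index per example
y_op = np.argmax(softmax(y), axis=1)
print(y_op)  # [0 2 9]

# softmax is monotonic, so taking argmax of the logits gives the same result
assert np.array_equal(y_op, np.argmax(y, axis=1))
```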

@zsdonghao
Member

y_op = tf.argmax(tf.nn.softmax(y), 1) does nothing in this tutorial; we put it there just to show users how to get an integer class prediction from the softmax output.
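To illustrate what you would use y_op for at inference time, here is a numpy sketch with hypothetical logits that a trained network might produce for two test images; in the tutorial the equivalent would be running the y_op node with a feed of test images:

```python
import numpy as np

# hypothetical logits from a trained MNIST network for two test images
logits = np.array([[0.1, 0.3, 8.2, -1.0, 0.0, 0.2, 0.1, 0.0, 0.5, 0.0],
                   [0.0, 0.2, 0.1,  0.0, 0.0, 0.0, 0.4, 7.5, 0.0, 0.3]])

# the y_op pattern: collapse each row of logits into a predicted digit
predicted_digits = np.argmax(logits, axis=1)
print(predicted_digits)  # [2 7]
```

In TensorLayer 1.x there is also a prediction helper along the lines of tl.utils.predict(sess, network, X_test, x, y_op); check the TensorLayer docs for the exact signature of your version.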

zsdonghao added a commit that referenced this issue May 4, 2019
 Update docs and code style for 2.0 release