import tensorflow as tf
import tensorlayer as tl

sess = tf.InteractiveSession()

# prepare data
X_train, y_train, X_val, y_val, X_test, y_test = \
    tl.files.load_mnist_dataset(shape=(-1, 784))

# define placeholder
x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
y_ = tf.placeholder(tf.int64, shape=[None, ], name='y_')

# define the network
network = tl.layers.InputLayer(x, name='input_layer')
network = tl.layers.DropoutLayer(network, keep=0.8, name='drop1')
network = tl.layers.DenseLayer(network, n_units=800,
                               act=tf.nn.relu, name='relu1')
network = tl.layers.DropoutLayer(network, keep=0.5, name='drop2')
network = tl.layers.DenseLayer(network, n_units=800,
                               act=tf.nn.relu, name='relu2')
network = tl.layers.DropoutLayer(network, keep=0.5, name='drop3')
# the softmax is implemented internally in tl.cost.cross_entropy(y, y_) to
# speed up computation, so we use identity here.
# see tf.nn.sparse_softmax_cross_entropy_with_logits()
network = tl.layers.DenseLayer(network, n_units=10,
                               act=tf.identity,
                               name='output_layer')

# define cost function and metric.
y = network.outputs
cost = tl.cost.cross_entropy(y, y_)
correct_prediction = tf.equal(tf.argmax(y, 1), y_)
acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
y_op = tf.argmax(tf.nn.softmax(y), 1)

# define the optimizer
train_params = network.all_params
train_op = tf.train.AdamOptimizer(learning_rate=0.0001, beta1=0.9, beta2=0.999,
                                  epsilon=1e-08, use_locking=False).minimize(cost, var_list=train_params)

# initialize all variables
sess.run(tf.initialize_all_variables())

# print network information
network.print_params()
network.print_layers()

# train the network
tl.utils.fit(sess, network, train_op, cost, X_train, y_train, x, y_,
             acc=acc, batch_size=500, n_epoch=500, print_freq=5,
             X_val=X_val, y_val=y_val, eval_train=False)

# evaluation
tl.utils.test(sess, network, acc, X_test, y_test, x, y_, batch_size=None, cost=cost)

# save the network to .npz file
tl.files.save_npz(network.all_params, name='model.npz')
sess.close()
Hi all.

I am a little confused by the following line in tutorial_mnist_simple.py:

39: y_op = tf.argmax(tf.nn.softmax(y), 1)

According to the documentation, "y_op is the integer output represents the class index."

It seems that we should compare y_op (the network's predicted class) with y_ (the ground-truth label). But on line 36, the cost function is defined as follows:

36: cost = tl.cost.cross_entropy(y, y_)

So we compare the variable y (not y_op) with y_. Moreover, I found that the variable y_op is never used in tutorial_mnist_simple.py: from line 40 to the last line, it never appears again.

I don't understand what this line does in tutorial_mnist_simple.py. Does it mean that y_op = tf.argmax(tf.nn.softmax(y), 1) has no effect in the tutorial? It seems really strange!
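For reference, my understanding of why the cost uses y rather than y_op (a sketch based on the tutorial's own comments; I have not checked the exact TensorLayer source) is that tl.cost.cross_entropy fuses the softmax into the loss, so it expects the raw logits y:

# roughly what tl.cost.cross_entropy(y, y_) computes, assuming it wraps
# the fused softmax loss mentioned in the tutorial's comments
cost = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=y_))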
y_op = tf.argmax(tf.nn.softmax(y), 1) does nothing in this tutorial; we put it there just to show users how to get an integer class prediction from the softmax output.
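For example, at inference time you can evaluate y_op to get class predictions. A minimal sketch against the TensorLayer 1.x API used in the tutorial (tl.utils.dict_to_one switches the dropout layers to inference mode by setting all keep probabilities to 1):

# predict classes for the first five test images
dp_dict = tl.utils.dict_to_one(network.all_drop)  # keep probabilities -> 1.0
feed_dict = {x: X_test[:5]}
feed_dict.update(dp_dict)
pred = sess.run(y_op, feed_dict=feed_dict)
print(pred)  # one integer class index per input image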