question about accuracy #26

ayqh opened this issue May 12, 2016 · 5 comments
ayqh commented May 12, 2016

int "recurrent network" the loss was calculated by the function tf.nn.softmax_cross_entropy_with_logits ,and the correct_pred is calculated by tf.equal(tf.argmax(pred,1), tf.argmax(y,1)), which take "pred " directly, how dose this make sense? I think it should be softmax(pred) instead of pred, but I code just works fine. This really confuse me. Can somebody explain whats going on ? thanks!

@aymericdamien (Owner)

It is working fine because it is using the argmax of pred. The last layer of the RNN example is a 10-dimensional vector (one entry per label), so the predicted class is the index of that layer's highest value (which is what argmax gives you). Softmax just squashes the values into the 0-1 range; because it is monotonic, it never changes which index is largest.
Softmax is used while training because having a normalized probability distribution may help performance, but when predicting, softmax is actually not needed.
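
A quick way to convince yourself of this: softmax preserves the ordering of its inputs, so the argmax is the same before and after. A minimal NumPy sketch:

```python
import numpy as np

logits = np.array([2.0, -1.0, 0.5])            # raw last-layer outputs ("pred")
probs = np.exp(logits) / np.exp(logits).sum()  # softmax: positive, sums to 1

# softmax is monotonic, so the highest-value index is unchanged
assert np.argmax(logits) == np.argmax(probs)
```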

@aymericdamien (Owner)

It is hard to say without knowing your data, because the network structure also depends on your data. If you are parsing words (ids), then you need to add an embedding layer. A sketch of one follows below.
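
For reference, a minimal sketch of an embedding layer using the TF 1.x-era API; vocab_size and embed_dim are assumed values you would set from your data:

```python
import tensorflow as tf

vocab_size, embed_dim = 10000, 128  # assumed sizes; set from your data

word_ids = tf.placeholder(tf.int32, [None, None])  # [batch, time] word ids
embedding = tf.Variable(
    tf.random_uniform([vocab_size, embed_dim], -1.0, 1.0))
# Maps each word id to a dense vector: result is [batch, time, embed_dim]
inputs = tf.nn.embedding_lookup(embedding, word_ids)
```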

@jakob-grabner

I also have a question about the recurrent network. The RNN function returns only the last model output: return tf.matmul(outputs[-1], weights['out']) + biases['out'], which means the loss is also calculated only from the last output. One example in the TensorFlow repo (ptb_word_lm) uses every output, not only the output of the last step. Which is the right approach, or what does it depend on?

@aymericdamien (Owner) commented Jun 3, 2016

It is because seq2seq is an encoding/decoding process that outputs a sequence, so calculating the loss for every output is important. In our example, however, we are simply doing classification over a whole sequence with a single output (the predicted class), so only the last output is meaningful (it is the output after all timesteps have been processed).
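
To make the contrast concrete, here is a sketch of both loss setups with the TF 1.x-era API. Sizes follow the MNIST example, and step_targets is a hypothetical per-timestep label list, not something from the repo:

```python
import tensorflow as tf

n_steps, n_hidden, n_classes = 28, 128, 10  # sizes from the MNIST example

# Per-timestep RNN outputs, each of shape [batch, n_hidden]
outputs = [tf.placeholder(tf.float32, [None, n_hidden]) for _ in range(n_steps)]
W = tf.Variable(tf.random_normal([n_hidden, n_classes]))
b = tf.Variable(tf.random_normal([n_classes]))

# (a) Whole-sequence classification: one label per sequence,
#     so only the last output feeds the loss.
y = tf.placeholder(tf.float32, [None, n_classes])
clf_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    logits=tf.matmul(outputs[-1], W) + b, labels=y))

# (b) Sequence labeling (ptb_word_lm style): one target per timestep,
#     so every output contributes to the loss.
step_targets = [tf.placeholder(tf.float32, [None, n_classes])
                for _ in range(n_steps)]
step_losses = [tf.nn.softmax_cross_entropy_with_logits(
                   logits=tf.matmul(o, W) + b, labels=t)
               for o, t in zip(outputs, step_targets)]
lm_loss = tf.reduce_mean(tf.add_n(step_losses)) / n_steps
```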

@christophetrinh

I am very confused by the terms batch_size and n_steps. Does the LSTM update its parameters after n_steps? In the recurrent network, the LSTM is fed a list of n_steps tensors of shape [batch_size, n_input]. So, in the case of MNIST classification, the cell is fed 128 samples of 28 pixels at every step in range(n_steps)?
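
For reference, the shapes in the MNIST example work out like this (a NumPy sketch of the data feed only; in that example the parameters are updated once per training batch, after the full n_steps unroll, not after every timestep):

```python
import numpy as np

batch_size, n_steps, n_input = 128, 28, 28  # constants from the example

# A batch of flattened 28x28 MNIST images...
batch_x = np.zeros((batch_size, n_steps * n_input), dtype=np.float32)
# ...reshaped so each image becomes a sequence of 28 rows (timesteps),
# each row being a 28-pixel input vector.
batch_x = batch_x.reshape((batch_size, n_steps, n_input))

x_t = batch_x[:, 0, :]  # what the cell sees at a single timestep
print(x_t.shape)        # (128, 28)
```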
