question about accuracy #26

In "recurrent network", the loss is calculated by tf.nn.softmax_cross_entropy_with_logits, and correct_pred is calculated by tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1)), which takes "pred" directly. How does this make sense? I think it should be softmax(pred) instead of pred, but the code just works fine. This really confuses me. Can somebody explain what's going on? Thanks!
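For reference, here is a minimal sketch of the lines in question (TF 1.x style; the placeholder shapes are assumed from the MNIST example, not quoted from it):

```python
import tensorflow as tf

n_classes = 10
pred = tf.placeholder(tf.float32, [None, n_classes])  # raw logits from the network
y = tf.placeholder(tf.float32, [None, n_classes])     # one-hot labels

# the loss function applies softmax internally, so it takes raw logits
cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))

# accuracy is computed directly on the logits, without an explicit softmax
correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
```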
Comments
It is working fine because it uses the argmax of pred. The last layer of the RNN example is a 10-dimensional vector (one value per label), so the predicted class is the index of that layer's highest value (which you get with argmax). Softmax is just used to squash values between 0 and 1; it is monotonic, so it does not change which index is highest.
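A quick sanity check of that claim, as a minimal sketch (TF 1.x style): softmax preserves the ordering of the values, so argmax gives the same index before and after it.

```python
import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1],
                      [0.5, 3.0, 0.2]])
probs = tf.nn.softmax(logits)  # squashed into (0, 1), each row sums to 1

with tf.Session() as sess:
    print(sess.run(tf.argmax(logits, 1)))  # [0 1]
    print(sess.run(tf.argmax(probs, 1)))   # [0 1], the same indices
```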
It is hard to say without knowing your data, because the network structure also depends on your data. If you are parsing words (ids), then you need to add an embedding layer.
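A minimal sketch of what such an embedding layer could look like (TF 1.x style; vocab_size and embed_dim are hypothetical placeholders, not values from the example):

```python
import tensorflow as tf

vocab_size, embed_dim = 10000, 128  # hypothetical sizes
word_ids = tf.placeholder(tf.int32, [None, None])  # [batch_size, n_steps]

# trainable embedding matrix, one row per word id
embedding = tf.Variable(
    tf.random_uniform([vocab_size, embed_dim], -1.0, 1.0))
# result has shape [batch_size, n_steps, embed_dim], ready to feed the RNN
inputs = tf.nn.embedding_lookup(embedding, word_ids)
```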
I also have a question about the recurrent network: why does the RNN function return only the last model output?
It is because seq2seq is an encoding/decoding process that outputs a sequence, so calculating a loss for every output is important there. In our example, however, we are simply doing classification over a whole sequence with a single output (the predicted class), so only the last output is meaningful (that is, the output after all timesteps have been "processed").
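A sketch of that setup (using tf.nn.dynamic_rnn rather than the example's exact code; depending on the TF 1.x version the cell class may live under tf.contrib.rnn instead):

```python
import tensorflow as tf

n_input, n_steps, n_hidden, n_classes = 28, 28, 128, 10
x = tf.placeholder(tf.float32, [None, n_steps, n_input])

cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden)
# outputs holds one vector per timestep: [batch_size, n_steps, n_hidden]
outputs, state = tf.nn.dynamic_rnn(cell, x, dtype=tf.float32)

W = tf.Variable(tf.random_normal([n_hidden, n_classes]))
b = tf.Variable(tf.random_normal([n_classes]))
# only the final timestep's output feeds the classifier
pred = tf.matmul(outputs[:, -1, :], W) + b
```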
I am very confused by the terms batch_size and n_steps. Does the LSTM update its parameters after n_steps? In the recurrent network, the LSTM is fed a list of n_steps tensors of shape [batch_size, n_input]. So, in the case of MNIST classification, is the cell fed 128 samples of 28 pixels at every step in range(n_steps)?
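To illustrate the shapes being asked about, a minimal numpy sketch (assuming the MNIST setup from the example, where each 28x28 image is read as 28 rows of 28 pixels; the parameters are updated once per batch, after the network has been unrolled over all n_steps):

```python
import numpy as np

batch_size, n_steps, n_input = 128, 28, 28
batch_x = np.random.rand(batch_size, 784)  # one flat MNIST batch

# reshape so each image becomes a sequence of 28 rows of 28 pixels
batch_x = batch_x.reshape(batch_size, n_steps, n_input)

# at each timestep t the cell sees a [batch_size, n_input] slice
step_t = batch_x[:, 0, :]
print(step_t.shape)  # (128, 28)
```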
int "recurrent network" the loss was calculated by the function tf.nn.softmax_cross_entropy_with_logits ,and the correct_pred is calculated by tf.equal(tf.argmax(pred,1), tf.argmax(y,1)), which take "pred " directly, how dose this make sense? I think it should be softmax(pred) instead of pred, but I code just works fine. This really confuse me. Can somebody explain whats going on ? thanks!