
Logits and labels have different shapes when computing cross-entropy loss #33

Closed

pachiko opened this issue Aug 12, 2018 · 1 comment


pachiko commented Aug 12, 2018

Hi,

I tried to adapt snippets of your code for a simple word-reversal problem, where each sentence has 3 words, but when I compute the cross-entropy loss I get an error like this: InvalidArgumentError (see above for traceback): logits and labels must be broadcastable: logits_size=[832,28] labels_size=[960,28].

I believe this is because the labels in my code have been padded, whereas the logits, which are the outputs of dynamic_decode, have variable sequence lengths. How did you manage to get the cross-entropy to work when the logits have variable sequence lengths?
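The workaround I was considering is to pad the decoder logits up to the label length and mask the padded steps before taking the loss. A minimal sketch (TF 1.x, using hypothetical placeholders in place of my real logits, labels, and target_lengths tensors):

```python
import tensorflow as tf

vocab_size = 28

# Hypothetical placeholders standing in for the real tensors:
# decoder logits from dynamic_decode (variable time dimension) and padded labels.
logits = tf.placeholder(tf.float32, [None, None, vocab_size])
labels = tf.placeholder(tf.int32, [None, None])
target_lengths = tf.placeholder(tf.int32, [None])

max_len = tf.shape(labels)[1]
decode_len = tf.shape(logits)[1]

# Pad the time dimension of the logits so it lines up with the padded labels.
padded_logits = tf.pad(logits, [[0, 0], [0, max_len - decode_len], [0, 0]])

# Mask the padded positions so they do not contribute to the loss.
weights = tf.sequence_mask(target_lengths, max_len, dtype=tf.float32)
loss = tf.contrib.seq2seq.sequence_loss(padded_logits, labels, weights)
```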

https://github.com/kcang2/Udacity-Deep-Learning-Assignment/blob/master/Assignment_6_3.ipynb

Best regards.


pachiko commented Aug 13, 2018

Sorry, I mixed up your code with someone else's. I'm closing the issue, since I don't have any problem when using dynamic_rnn as the decoder.
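With tf.nn.dynamic_rnn the decoder outputs keep the same time dimension as the padded inputs, so the logits and labels line up by construction. A rough TF 1.x sketch with hypothetical placeholder inputs:

```python
import tensorflow as tf

vocab_size = 28
embed_dim = 32

# Hypothetical placeholders: padded decoder inputs and their true lengths.
decoder_inputs = tf.placeholder(tf.float32, [None, None, embed_dim])
target_lengths = tf.placeholder(tf.int32, [None])

cell = tf.nn.rnn_cell.LSTMCell(128)
# dynamic_rnn returns outputs with the same time dimension as its inputs,
# so projecting them gives logits shaped like the padded labels.
outputs, _ = tf.nn.dynamic_rnn(cell, decoder_inputs,
                               sequence_length=target_lengths, dtype=tf.float32)
logits = tf.layers.dense(outputs, vocab_size)
```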

pachiko closed this as completed Aug 13, 2018