Conversation

@dakuang (Contributor) commented Apr 21, 2017

loss_fn actually had two additional parameters, batch_size and num_steps, but they were commented out. With them commented out, the test run falls back to batch_size=20 when computing the loss, and the result is confusing (a perplexity of roughly 1.3). So I pass batch_size to loss_fn.

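A minimal sketch of why the divisor matters (illustrative numbers and a hypothetical loss_fn signature, not the tutorial's exact code): the PTB-style cost sums the token cross-entropy over the batch and divides by batch_size, and perplexity is exp(total_cost / iters). If the test run (batch_size=1) reuses the training batch_size=20, the cost shrinks by 20x and the reported perplexity collapses toward 1.

```python
import numpy as np

def loss_fn(summed_xent, batch_size):
    # per-batch cost in the PTB recipe: summed token losses / batch_size
    return summed_xent / batch_size

num_steps = 35
per_token_loss = 4.6               # a plausible per-token cross-entropy (nats)
test_batch_size = 1                # the test run feeds one sequence at a time
summed_xent = per_token_loss * test_batch_size * num_steps

cost_right = loss_fn(summed_xent, batch_size=test_batch_size)  # correct divisor
cost_wrong = loss_fn(summed_xent, batch_size=20)               # training divisor

iters = num_steps
print(np.exp(cost_right / iters))  # ~99.5, a sensible PTB test perplexity
print(np.exp(cost_wrong / iters))  # ~1.26, the confusing "perplexity ~1.3"
```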
@zsdonghao merged commit 96a08b6 into tensorlayer:master on Apr 21, 2017
@dakuang deleted the tutorial-fix branch on April 21, 2017 02:00