
Conversation

@AllanHasegawa
Contributor

When we use a batch of size 64 with the MNIST dataset, the last test batch will have size 16.

The current code does not dynamically adjust the batch size, so it crashes during validation because it assumes every batch has size 64.

This fix simply computes the actual batch size when needed.
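For illustration, here is a minimal sketch of the failure mode and the fix, assuming the flattened 784-dim MNIST input used in this repo's notebooks (the variable names and the exact buggy line are hypothetical; the actual diff may differ):

```python
import torch
from torchvision import datasets, transforms

transform = transforms.ToTensor()
testset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True,
                         train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=False)

# 10,000 test images / 64 = 156 full batches plus one final batch of 16.
for images, labels in testloader:
    # Buggy: hard-coding the batch size crashes on the final batch of 16.
    # images = images.view(64, 784)

    # Fixed: compute the batch size from the tensor itself.
    batch_size = images.shape[0]
    images = images.view(batch_size, 784)
```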

@jchernus jchernus merged commit a7bbe6a into udacity:master Feb 18, 2019
GedasGa pushed a commit to GedasGa/deep-learning-v2-pytorch that referenced this pull request Mar 24, 2019 (…_size_uneven):

Fix crash when using an uneven batch size
