batch_size and the input_shape dependency #9
Comments
This is luckily a non-issue now, because it turns out you can leave the batch size unspecified (`None` in `input_shape`).

IMO it's still a bit confusing, since it doesn't go the other way, from `input_shape` to batch size. I specified the input shape to be `(1, num_features)` to do online learning, but I still get the default 128-sample batch size without also setting that explicitly.

So you'll need to use `BatchIterator(batch_size=1)`.

Sure, that's what I settled on, but it wasn't obvious to me without reading the source. Just some feedback.
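For reference, a minimal sketch of the setup the commenters converged on for online learning, assuming the `nolearn.lasagne` API as described in this issue (the `batch_iterator` keyword follows the issue's wording and may be named `batch_iterator_train`/`batch_iterator_test` in later versions; `num_features` and the layer sizes are placeholders):

```python
from lasagne import layers
from lasagne.nonlinearities import softmax
from lasagne.updates import nesterov_momentum
from nolearn.lasagne import BatchIterator, NeuralNet

num_features = 30  # placeholder; substitute the real feature count

net = NeuralNet(
    layers=[
        ('input', layers.InputLayer),
        ('output', layers.DenseLayer),
    ],
    # None leaves the batch dimension of the input shape unspecified...
    input_shape=(None, num_features),
    output_num_units=2,
    output_nonlinearity=softmax,
    update=nesterov_momentum,
    update_learning_rate=0.01,
    update_momentum=0.9,
    # ...but the iterator's batch size must still be set explicitly
    # to get true online (one-sample-per-update) learning.
    batch_iterator=BatchIterator(batch_size=1),
    max_epochs=1,
)
```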
There is currently a non-transparent, non-intuitive dependency between the `batch_size` and the `input_shape`.

Currently the default `batch_iterator` is `BatchIterator(batch_size=128)`. While 128 is certainly a reasonable value for `batch_size`, the user must know the default is 128 in order to correctly set the `input_shape`. Ideally there would be some way for the user to change the `batch_size` without having to remember to update the input shape. One idea would be some sort of lazily resolved `BATCH_SIZE` constant that could be used in the input shape. The iterator could then have an additional method `get_batch_size`, which is used by the `NeuralNet` to set the `BATCH_SIZE` constant.
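The proposal might look something like the sketch below. This is purely illustrative; `BATCH_SIZE`, `get_batch_size`, and the resolution logic shown here are not part of nolearn, they just demonstrate how a lazily resolved placeholder could tie the two settings together:

```python
# Hypothetical sketch; none of these names ship with nolearn.
BATCH_SIZE = object()  # sentinel placeholder for the batch dimension


class BatchIterator:
    def __init__(self, batch_size=128):
        self.batch_size = batch_size

    def get_batch_size(self):
        # Proposed accessor the NeuralNet would call to resolve
        # the BATCH_SIZE placeholder.
        return self.batch_size


class NeuralNet:
    def __init__(self, input_shape, batch_iterator=None):
        self.batch_iterator = batch_iterator or BatchIterator()
        # Lazily resolve the placeholder against the iterator, so the
        # batch size and the input shape can no longer drift apart.
        self.input_shape = tuple(
            self.batch_iterator.get_batch_size() if dim is BATCH_SIZE else dim
            for dim in input_shape
        )


# Changing the batch size now updates the input shape automatically:
net = NeuralNet(input_shape=(BATCH_SIZE, 30),
                batch_iterator=BatchIterator(batch_size=1))
assert net.input_shape == (1, 30)
```

With this design the user writes `input_shape=(BATCH_SIZE, num_features)` once and only ever changes the iterator, removing the need to remember the 128 default.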