logger.error("Number of training examples {} not divisible"
" by batch size {}.".format(num_train, batch_size))
raisePersephoneException("Number of training examples {} not divisible"
" by batch size {}.".format(num_train, batch_size))
else:
# Dynamically change batch size based on number of training
# examples.
self.batch_size=int(num_train/32.0)
ifself.batch_size>64:
# I was getting OOM errors when training with 4096 sents, as
# the batch size jumped to 128
self.batch_size=64
# For now we hope that training numbers are powers of two or
# something... If not, crash before anything else happens.
ifnum_train%self.batch_size!=0:
logger.error("Number of training examples {} not divisible"
" by batch size {}.".format(num_train, self.batch_size))
raisePersephoneException("Number of training examples {} not divisible"
" by batch size {}.".format(num_train, batch_size))
This is an artificial limitation, since the remainder could always just go into a smaller final batch.
The dynamic sizing also causes a bug: if there are fewer than 32 training examples, `int(num_train / 32.0)` truncates to zero, so the computed batch size is zero and the divisibility check divides by zero. The temporary fix in such cases is to pass a smaller explicit batch size, but it's messy.
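A minimal sketch of one way to lift the restriction: use ceiling division so the final batch simply carries the remainder, and clamp the computed batch size to at least one example. `make_batches`, `target_divisor`, and `max_batch` are hypothetical names for illustration, not the project's actual API:

```python
import math

def make_batches(examples, target_divisor=32, max_batch=64):
    """Split examples into batches, allowing a smaller final batch."""
    num_train = len(examples)
    # Clamp so that fewer than `target_divisor` examples still gives a
    # batch size of at least 1 (avoids the divide-by-zero above).
    batch_size = max(1, min(max_batch, num_train // target_divisor))
    num_batches = math.ceil(num_train / batch_size)
    return [examples[i * batch_size:(i + 1) * batch_size]
            for i in range(num_batches)]

batches = make_batches(list(range(100)))
assert len(batches) == 34 and len(batches[-1]) == 1
```

With this scheme, 100 examples yield 33 batches of 3 plus a final batch of 1, and 20 examples clamp the batch size to 1 instead of crashing.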