I don't understand the way the training loss is averaged.
The losses are summed over each minibatch because of the size_average=False argument to cross_entropy. Then the line loss_val = loss_val / batch_size is presumably meant to average over the minibatch, except that each example in the batch contains many letters to decode, so the loss is summed over more than batch_size letters. The correct denominator would be y.shape[0] (the predictions from all examples in the batch are concatenated into a one-dimensional vector).
According to that, line 66 in seq2seq.py should be
loss_val = loss_val / y.shape[0]
Am I right, or am I missing something?
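To make the question concrete, here is a hypothetical sketch of the normalization being discussed (not the actual seq2seq.py code; the sequence lengths and per-letter loss values are made up). It assumes, as described above, that the decoder's targets are flattened so that y holds one letter per prediction:

```python
# Hypothetical illustration of the normalization discussed above.
batch_size = 4
seq_lens = [5, 7, 3, 9]  # letters to decode per utterance (made up)

# Per-letter cross-entropy values (made-up, equal for clarity).
per_letter_losses = [0.5] * sum(seq_lens)

# size_average=False: cross_entropy returns the SUM over all letters.
loss_val = sum(per_letter_losses)  # 12.0

# Dividing by batch_size averages over utterances, not letters.
per_utterance = loss_val / batch_size  # 3.0

# y.shape[0] equals the total number of letters, sum(seq_lens) == 24,
# so this division recovers the true mean loss per letter.
per_letter = loss_val / sum(seq_lens)  # 0.5
```

With unequal per-letter losses the same point holds: only division by y.shape[0] yields the mean loss per decoded letter.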
You're correct: right now the total loss is divided by the batch size. The reasoning is that if we instead divided by the length of the examples, we would be down-weighting the gradients of longer utterances, which isn't quite right semantically, since every character in the output counts essentially the same amount toward the overall error rate.
That said, your proposal would likely work fine; to be honest, I haven't done a careful comparison of which approach is better. There are likely trade-offs.