I'm training the model on a custom dataset, and I'm finding it confusing to interpret the various losses displayed during training. For example, is a large negative training loss better, or should I concentrate on the negative log-likelihood? In short, how will I know that the model is converging? The following image is from my training process.
Your help is greatly appreciated.
You could plot the training loss with matplotlib to see whether it's converging. We didn't add this by default because we don't want to pull in too many dependencies.

In general, a smaller loss value means a better model.
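A minimal sketch of what that plotting could look like. The `train_losses` list here is dummy data for illustration; in practice you would append the loss printed at each training iteration. A simple moving average (my addition, not part of the library) smooths out per-batch noise so the trend is easier to read — convergence shows up as the curve flattening out.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# Dummy losses for illustration; collect these from your training loop.
train_losses = [2.3, 1.8, 1.4, 1.1, 0.95, 0.90, 0.88, 0.87, 0.87, 0.86]

def moving_average(values, window=3):
    """Smooth raw losses so the overall trend is easier to see."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

smoothed = moving_average(train_losses)

plt.plot(train_losses, label="raw loss")
plt.plot(range(len(smoothed)), smoothed, label="smoothed")
plt.xlabel("iteration")
plt.ylabel("training loss")
plt.legend()
plt.savefig("loss_curve.png")
```

If the smoothed curve stops decreasing and stays roughly flat for many iterations, training has effectively converged; if it is still trending downward, the model may benefit from more training.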