
Interpreting the loss values #12

Closed
dalonlobo opened this issue Dec 18, 2018 · 1 comment

Comments

@dalonlobo

Hi Team,

I'm training the model on a custom dataset, and I'm finding it confusing to interpret the various losses displayed during training. For example, is a large negative training loss better, or should I concentrate on the negative log likelihood? In short, how will I know that the model is converging? The following image is from my training process.

[screenshot of training output showing loss values]

Your help is greatly appreciated.

@wq2012 wq2012 self-assigned this Dec 18, 2018
@wq2012 wq2012 added the question Further information is requested label Dec 18, 2018
@wq2012 (Member) commented Dec 18, 2018

You could plot the training loss with matplotlib to see whether it's converging. We didn't add this by default because we don't want the library to have too many dependencies.

But in general, a smaller loss value means a better model.
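
For reference, here is a minimal sketch of such a plot, assuming the per-iteration loss values printed during training have been saved to a text log. The file name `training.log` and the log-line format matched by the regular expression are hypothetical; adapt them to whatever your training output actually looks like.

```python
import re

import matplotlib.pyplot as plt


def parse_losses(log_path):
    """Extract training-loss values from a saved training log.

    Assumes (hypothetically) that each relevant log line contains
    something like "Training loss: -0.123".
    """
    losses = []
    pattern = re.compile(r"Training loss:\s*(-?\d+\.?\d*)")
    with open(log_path) as f:
        for line in f:
            match = pattern.search(line)
            if match:
                losses.append(float(match.group(1)))
    return losses


# Hypothetical log file captured from the training run.
losses = parse_losses("training.log")

# Plot loss against iteration number to inspect convergence.
plt.plot(losses)
plt.xlabel("Training iteration")
plt.ylabel("Training loss")
plt.title("Training loss over iterations")
plt.show()
```

If the curve flattens out and stops decreasing noticeably over further iterations, training has effectively converged.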

@wq2012 wq2012 closed this as completed Dec 18, 2018