Use FScore (Macro) as monitor metric instead of accuracy for Classification Task [FEATURE] #159

Open
jtbai opened this issue Jul 12, 2022 · 0 comments


When training for a classification task, the F-score is more relevant than accuracy. After training, when I select the best model, I often get a subpar model: a better model with a higher F-score may have been available, but no checkpoint was logged for it.

Either use the F-score as the default monitor metric for classification, or add an additional task type such as "unbalanced_classification".

At the moment I have to manually add the macro F-score metric and set "max" as the monitor mode. This is very easy to fix; I just don't see why someone with a typical classification task would select accuracy over the F-score as the target metric.
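To illustrate why accuracy can be a misleading monitor metric on imbalanced data: a model that always predicts the majority class still scores high accuracy, while the macro F-score exposes its failure on the minority class. A minimal sketch in plain Python (the `macro_f1` helper is written here for illustration, it is not part of the library):

```python
def macro_f1(y_true, y_pred):
    """Macro F1: unweighted mean of per-class F1 scores.

    Each class contributes equally, regardless of how many
    examples it has -- unlike accuracy, which is dominated
    by the majority class.
    """
    classes = set(y_true) | set(y_pred)
    f1_scores = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        f1_scores.append(f1)
    return sum(f1_scores) / len(f1_scores)

# Imbalanced dataset: 9 negatives, 1 positive.
# A degenerate model that always predicts class 0:
y_true = [0] * 9 + [1]
y_pred = [0] * 10

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(accuracy)                   # 0.9  -- looks good
print(macro_f1(y_true, y_pred))   # ~0.47 -- reveals the class-1 failure
```

Monitoring accuracy would happily checkpoint this degenerate model, while monitoring macro F1 (with "max" mode) would not.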
