
Evaluation: Properly handle no-arg binary metrics for binary case #4802

Merged
Merged 3 commits from ab_4759_evaluation_binary into master on Mar 15, 2018

Conversation

@AlexDBlack (Contributor) commented on Mar 14, 2018

Fixes: #4759

Previously, Evaluation.f1(), .precision(), .recall(), etc. returned macro-averaged values, even in the binary case. Macro-averaging makes sense for the multi-class case, but not for the binary (2-class) case.
The no-arg methods (f1(), etc.) now report the binary metrics that would be expected, i.e. the values for the positive class (class 1).

The Javadoc and the stats() output now also make it clear exactly what is being reported for the binary vs. multi-class cases. Example stats() output for a 2-class evaluation:

Predictions labeled as 0 classified by model as 0: 3 times
Predictions labeled as 0 classified by model as 1: 1 times
Predictions labeled as 1 classified by model as 0: 5 times
Predictions labeled as 1 classified by model as 1: 7 times


==========================Scores========================================
 # of classes:    2
 Accuracy:        0.6250
 Precision:       0.6250
 Recall:          0.6667
 F1 Score:        0.7000
Precision, recall & F1: reported for positive class (class 1 - "1") only
========================================================================
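For reference, a minimal sketch of how the updated no-arg metrics might be called for a 2-class problem. The package names (org.deeplearning4j.eval.Evaluation, org.nd4j.linalg.factory.Nd4j) and the sample data below are assumptions for illustration, not taken from this PR:

// Sketch only: assumed imports for the DL4J evaluation API of this era
import org.deeplearning4j.eval.Evaluation;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class BinaryEvalExample {
    public static void main(String[] args) {
        // Hypothetical binary data: one-hot labels and model output probabilities, one row per example
        INDArray labels = Nd4j.create(new double[][]{
                {0, 1},    // true class 1
                {1, 0},    // true class 0
                {0, 1}});  // true class 1
        INDArray predictions = Nd4j.create(new double[][]{
                {0.2, 0.8},   // predicted class 1
                {0.6, 0.4},   // predicted class 0
                {0.7, 0.3}}); // predicted class 0 (an error)

        Evaluation eval = new Evaluation(2);   // 2 classes -> binary case
        eval.eval(labels, predictions);

        // For the binary case, the no-arg methods now report the metric for the
        // positive class (class 1) rather than a macro average:
        System.out.println("F1 (positive class):        " + eval.f1());
        System.out.println("Precision (positive class): " + eval.precision());
        System.out.println("Recall (positive class):    " + eval.recall());

        // Per-class values remain available via the int-arg overloads:
        System.out.println("F1 for class 0: " + eval.f1(0));

        // stats() prints the confusion matrix plus a note on what is reported
        System.out.println(eval.stats());
    }
}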

@AlexDBlack AlexDBlack merged commit e1ccbbf into master Mar 15, 2018

2 checks passed: codeclimate (All good!) and continuous-integration/jenkins/pr-merge (This commit looks good).

@AlexDBlack AlexDBlack deleted the ab_4759_evaluation_binary branch Mar 15, 2018
