Test metric #466
Conversation
Codecov Report
@@             Coverage Diff              @@
##           development     #466      +/-   ##
===============================================
- Coverage        78.58%   78.57%   -0.01%
===============================================
  Files              130      130
  Lines            10073    10074       +1
===============================================
  Hits              7916     7916
- Misses            2157     2158       +1
Continue to review full report at Codecov.
mfeurer left a comment
Looks mostly good, only minor change requests.
test/test_metric/test_metrics.py
Outdated
y_pred = y_true.copy()

# the best possible score of r2 loss is 1.
if metric == 'r2':
Could you use the optimum attribute here?
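A minimal sketch of what that could look like, assuming the Scorer objects expose their best achievable value via a `_optimum` attribute (as set through `make_scorer`'s `optimum` argument):

```python
import numpy as np

import autosklearn.metrics

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = y_true.copy()

for metric, scorer in autosklearn.metrics.REGRESSION_METRICS.items():
    score = scorer(y_true, y_pred)
    # A perfect prediction should land on the scorer's own optimum,
    # so no per-metric special case (r2, mean_squared_error, ...) is needed.
    assert np.isclose(score, scorer._optimum)
```

This would drop the hard-coded best value for each metric from the test loop.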
test/test_metric/test_metrics.py
Outdated
for metric, scorer in autosklearn.metrics.CLASSIFICATION_METRICS.items():
    # Skip functions not applicable for binary classification.
    # TODO: Average precision should work for binary classification,
    # TODO: but its behavior is not right.
not right in what sense?
test/test_metric/test_metrics.py
Outdated
y_true = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
y_pred = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0],
                   [1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])
if metric is 'log_loss':
Could this be done via the optimum attribute of the scorer?
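Sketched out, the binary log_loss case could then read as follows (same `_optimum` assumption as above; sklearn's log_loss clips probabilities internally, so a perfect prediction only reaches the optimum up to a small tolerance):

```python
import numpy as np

import autosklearn.metrics

y_true = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
y_pred = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0],
                   [1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])

scorer = autosklearn.metrics.CLASSIFICATION_METRICS['log_loss']
score = scorer(y_true, y_pred)
# log_loss is a loss, so the scorer negates it; its optimum is 0 and a
# perfect prediction gets numerically close to it rather than exactly there.
assert np.isclose(score, scorer._optimum, atol=1e-6)
```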
test/test_metric/test_metrics.py
Outdated
y_pred = np.array([[1.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
if metric is 'log_loss':  # the best possible score for log_loss is 0.
    previous_score = 0
Could this be done via the optimum attribute of the scorer?
test/test_metric/test_metrics.py
Outdated
continue
y_true = np.array([[1, 0, 0], [1, 1, 0], [0, 1, 1], [1, 1, 1]])
y_pred = y_true.copy()
previous_score = 1
Could this be done via the optimum attribute of the scorer?