add metrics API proposal #1
Signed-off-by: Miro Dudik <mdudik@gmail.com>
api/METRICS.md (outdated):

```
metric_by_group(metric, y_true, y_pred, *, sensitive_features, **other_kwargs)
# return the summary for the provided metrics

make_metric_by_group(metric)
```
Should we add something to the name to make it clear that it's returning another function?
Do you have an alternative name in mind? The current proposal is analogous to sklearn.metrics.make_scorer.
But it's not making a metric_by_group(), surely? What it returns is a function that can be called with y_true, y_pred, and sensitive_features (and kwargs).
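For context, here is a minimal sketch of the pair of functions under discussion. The names and the `metric_by_group` signature come from the proposal; the bodies are purely illustrative, not the actual fairlearn implementation:

```python
import numpy as np

def metric_by_group(metric, y_true, y_pred, *, sensitive_features, **other_kwargs):
    """Illustrative sketch: evaluate `metric` overall and once per group
    defined by `sensitive_features`; return the results as a dict."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    groups = np.asarray(sensitive_features)
    result = {"overall": metric(y_true, y_pred, **other_kwargs)}
    for g in np.unique(groups):
        mask = groups == g
        result[g] = metric(y_true[mask], y_pred[mask], **other_kwargs)
    return result

def make_metric_by_group(metric):
    """Illustrative sketch of the factory (cf. sklearn.metrics.make_scorer):
    wrap `metric` into a function taking y_true, y_pred, sensitive_features."""
    def grouped(y_true, y_pred, *, sensitive_features, **other_kwargs):
        return metric_by_group(metric, y_true, y_pred,
                               sensitive_features=sensitive_features,
                               **other_kwargs)
    return grouped
```

This makes the naming concern concrete: `make_metric_by_group(accuracy_score)` does not return a summary, it returns the `grouped` callable, which only produces the per-group summary once invoked with data.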
Signed-off-by: Miro Dudik <mdudik@gmail.com>
api/METRICS.md (outdated):

> * `mean_absolute_error`, `mean_squared_error`, `mean_squared_error(..., squared=False)`
> 1. Should we introduce balanced error metrics for probabilistic classification?
>    * `balanced_mean_{squared,absolute}_error`, `balanced_log_loss`
> 1. Do we keep `mean_prediction` and `mean_{over,under}prediction`?
Unless we no longer need it in the dashboard, it will need to be available in Python. We could special-case these and not make them callable by the user if we want. (What about selection rate, and the other metrics like our differently normalized underprediction rate?)
For the time being, I'll proceed with non-breaking changes. We will need to review metrics / dashboard integration in a separate PR.
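On the balanced error metrics floated above: one plausible reading of "balanced", by analogy with sklearn's `balanced_accuracy_score`, is an unweighted average of per-class errors. A hypothetical sketch for the binary probabilistic case (the name matches the proposal; the definition here is an assumption, not the proposal's):

```python
import numpy as np

def balanced_mean_absolute_error(y_true, y_proba):
    """Hypothetical sketch: mean absolute error between the 0/1 label and the
    predicted probability, computed per class and then averaged with equal
    class weights (so the majority class does not dominate)."""
    y_true = np.asarray(y_true)
    y_proba = np.asarray(y_proba)
    per_class = [np.abs(y_true[y_true == c] - y_proba[y_true == c]).mean()
                 for c in np.unique(y_true)]
    return float(np.mean(per_class))
```

Whatever definition is chosen, it would presumably compose with `make_metric_by_group` like any other metric.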
rihorn2 left a comment:
Overall, this looks like it minimally impacts the dashboard, save for the few special cases listed below.

I know y'all have a very good understanding of this proposal, but as somebody who's not as familiar with the API as you are, I have a hard time following it. It would be nice if you could add the motivation, explain the status quo, and then explain what the proposed API does and how. That would make it easier for the rest of us to follow :)
Signed-off-by: Miro Dudik <mdudik@gmail.com>
addresses #3