
Conversation

@MiroDudik (Member) commented Feb 28, 2020:

addresses #3

Signed-off-by: Miro Dudik <mdudik@gmail.com>
api/METRICS.md Outdated
metric_by_group(metric, y_true, y_pred, *, sensitive_features, **other_kwargs)
# returns the summary for the provided metric

make_metric_by_group(metric)
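
As a reading aid, here is a minimal sketch of what `metric_by_group` might compute, assuming the summary pairs an overall metric value with one value per group; the dict container and variable names are illustrative assumptions, not the proposed return type:

```python
# Minimal sketch of the proposed metric_by_group (assumption: the summary
# pairs an overall value with one value per group; the dict container is
# illustrative, not the proposed return type).
import numpy as np

def metric_by_group(metric, y_true, y_pred, *, sensitive_features, **other_kwargs):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    groups = np.asarray(sensitive_features)
    summary = {"overall": metric(y_true, y_pred, **other_kwargs), "by_group": {}}
    for group in np.unique(groups):
        mask = groups == group
        summary["by_group"][group] = metric(y_true[mask], y_pred[mask], **other_kwargs)
    return summary
```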
Member commented:

Should we add something to the name to make it clear that it's returning another function?

@MiroDudik (Member, Author) replied:

Do you have an alternative name in mind? The current proposal is analogous to `sklearn.metrics.make_scorer`.
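
For comparison, `make_scorer` is also a factory that returns a callable rather than computing anything itself; standard scikit-learn usage looks like:

```python
# make_scorer wraps a metric into a scorer with signature
# scorer(estimator, X, y); calling the factory returns a function.
from sklearn.metrics import make_scorer, fbeta_score

ftwo_scorer = make_scorer(fbeta_score, beta=2)
```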

@riedgar-ms (Member) commented Mar 12, 2020:

But it's not making a `metric_by_group()`, surely? What it returns is a function which can be called with `y_true`, `y_pred`, and `sensitive_features` (and kwargs).
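
To make the naming concern concrete, here is a hedged sketch of the factory as described in the thread (it reuses the `metric_by_group` sketch above; the wrapper name `grouped_metric` is an illustrative assumption):

```python
# Sketch of make_metric_by_group: it returns a function, not a summary,
# which is the naming concern raised above. Relies on the metric_by_group
# sketch earlier in this thread; the wrapper name is illustrative.
import functools

def make_metric_by_group(metric):
    @functools.wraps(metric)
    def grouped_metric(y_true, y_pred, *, sensitive_features, **other_kwargs):
        return metric_by_group(metric, y_true, y_pred,
                               sensitive_features=sensitive_features,
                               **other_kwargs)
    return grouped_metric
```

For example, `accuracy_by_group = make_metric_by_group(accuracy_score)` would itself be called with `y_true`, `y_pred`, and `sensitive_features`.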

Signed-off-by: Miro Dudik <mdudik@gmail.com>
Signed-off-by: Miro Dudik <mdudik@gmail.com>
@MiroDudik requested a review from riedgar-ms March 4, 2020 14:24
Signed-off-by: Miro Dudik <mdudik@gmail.com>
api/METRICS.md Outdated
* `mean_absolute_error`, `mean_squared_error`, `mean_squared_error(...,squared=False)`
1. Should we introduce balanced error metrics for probabilistic classification?
* `balanced_mean_{squared,absolute}_error`, `balanced_log_loss`
1. Do we keep `mean_prediction` and `mean_{over,under}prediction`?
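
One possible reading of "balanced" in the list above is class-balanced reweighting, as in `balanced_accuracy_score`; a hedged sketch under that assumption (the proposal does not pin down the definition):

```python
# Hedged sketch of balanced_log_loss, assuming "balanced" means weighting
# samples inversely to class frequency so every class contributes equally.
# This reading is an assumption; the proposal does not fix the definition.
import numpy as np
from sklearn.metrics import log_loss

def balanced_log_loss(y_true, y_prob):
    y_true = np.asarray(y_true)
    classes, counts = np.unique(y_true, return_counts=True)
    freq = dict(zip(classes, counts / len(y_true)))
    weights = np.array([1.0 / freq[label] for label in y_true])
    return log_loss(y_true, y_prob, sample_weight=weights)
```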
A reviewer commented:

Unless we no longer need it in the dashboard, it will need to be available in Python. We could special-case these and not have them callable by the user if we want. (What about `selection_rate` and the other ones, like our differently normalized underprediction rate?)
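
For reference, here is a sketch of what these metrics are commonly taken to compute; the formulas are assumptions read off the names, not necessarily fairlearn's exact definitions:

```python
# Assumed formulas for the metrics under discussion (not verified against
# fairlearn's implementation).
import numpy as np

def mean_prediction(y_true, y_pred):
    # Average predicted value; y_true is accepted only for API symmetry.
    return np.mean(y_pred)

def mean_overprediction(y_true, y_pred):
    # Positive part of (y_pred - y_true), averaged over all samples.
    return np.mean(np.maximum(np.subtract(y_pred, y_true), 0))

def mean_underprediction(y_true, y_pred):
    # Positive part of (y_true - y_pred), averaged over all samples.
    return np.mean(np.maximum(np.subtract(y_true, y_pred), 0))

def selection_rate(y_true, y_pred, pos_label=1):
    # Fraction of predictions equal to the positive label.
    return np.mean(np.asarray(y_pred) == pos_label)
```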

@MiroDudik (Member, Author) replied:

For the time being, I'll proceed with non-breaking changes. We will need to review metrics / dashboard integration in a separate PR.

@rihorn2 left a review:

Overall, this looks like it minimally impacts the dashboard, save for the few special cases listed below.

@adrinjalali (Member) commented:

I know y'all have a very good understanding of this proposal, but as somebody who's not as familiar with the API as you are, I have a hard time following it. It'd be nice if you could include some motivation, explain the status quo, and explain what the proposed API does and how. It'd make it easier for the rest of us to follow :)

Signed-off-by: Miro Dudik <mdudik@gmail.com>
@MiroDudik merged commit 742d4c5 into fairlearn:master Mar 16, 2020