
Supporting Fairness Metrics for Regression #1313

Open
aminadibi opened this issue Nov 14, 2023 · 2 comments
Labels
enhancement New feature or request

Comments

@aminadibi

Is your feature request related to a problem? Please describe.

It's frustrating that the user guide explains demographic parity for regression, but it's not implemented in the package. Confusingly, the FAQ mentions that "any classification or regression algorithm can be evaluated using our metrics." Even more confusing is that demographic_parity_difference does not throw an error when applied to continuous y_true and y_pred values. Would it not be possible to implement it based on Agarwal et al. and Steinberg et al.?
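
For illustration, here's a minimal reproduction of that behaviour on synthetic data (the returned number is not a meaningful demographic parity difference):

```python
import numpy as np
from fairlearn.metrics import demographic_parity_difference

rng = np.random.default_rng(0)
y_true = rng.normal(size=100)                        # continuous targets
y_pred = y_true + rng.normal(scale=0.5, size=100)    # continuous predictions
sensitive = rng.choice(["a", "b"], size=100)

# No exception is raised even though the metric assumes categorical labels.
print(demographic_parity_difference(y_true, y_pred, sensitive_features=sensitive))
```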

Potential Solution:

  • Make sure metric functions that are not meant to be applied to regression data throw an error when applied to non-categorical outcomes (see the sketch after this list).
  • Revise the documentation and add specific headings about non-classification algorithms, clarifying what is and is not supported.
  • Implement regression-based metrics.
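
A rough sketch of what the first point could look like; the helper name and placement are purely illustrative, not existing Fairlearn code:

```python
import numpy as np

def _check_categorical(y, name, max_classes=10):
    """Raise if `y` looks continuous rather than categorical (illustrative only)."""
    arr = np.asarray(y)
    n_unique = np.unique(arr).size
    if np.issubdtype(arr.dtype, np.floating) and n_unique > max_classes:
        raise ValueError(
            f"{name} appears to be continuous ({n_unique} distinct float values); "
            "classification fairness metrics such as demographic_parity_difference "
            "are not defined for regression outputs."
        )
```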

Alternatives Considered

dalex has an implementation based on Steinberg et al., but it only works when the user has fit the model themselves and has access to the model object.

@romanlutz
Member

Thank you, @aminadibi, for sharing your experience. I can see why that is frustrating!

@MiroDudik should comment since he's one of the authors of the paper, but it sounds like we could support regression by just comparing average predictions across groups, and then applying the usual make_derived_metric for grouped metrics.
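
Roughly, something like this (mean_prediction_difference is just an illustrative name, not something we ship today):

```python
from fairlearn.metrics import make_derived_metric, mean_prediction

# Compare average predictions across groups, as suggested above.
mean_prediction_difference = make_derived_metric(
    metric=mean_prediction, transform="difference"
)

# Usage: largest gap in mean prediction between any two groups.
# gap = mean_prediction_difference(y_true, y_pred, sensitive_features=sensitive)
```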

The FAQ page is referring to algorithms, so that statement is technically true. We don't claim completeness since that's impossible, but for reasonable requests like this we do our best to accommodate them. In fact, you can already support any metric via make_derived_metric (see https://fairlearn.org/v0.9/user_guide/assessment/custom_fairness_metrics.html). Common definitions are, of course, supported via explicit inclusion in the metrics module.
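
For example, any scikit-learn-style regression metric can already be evaluated per group with MetricFrame (mean_absolute_error is just an example; y_true, y_pred, and sensitive are placeholders for your data):

```python
from fairlearn.metrics import MetricFrame
from sklearn.metrics import mean_absolute_error

mf = MetricFrame(
    metrics=mean_absolute_error,
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sensitive,
)
print(mf.by_group)      # per-group MAE
print(mf.difference())  # largest between-group gap in MAE
```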

We should definitely look into the demographic_parity_difference code and whether it should throw an exception.

We will get back to you on these for sure!

@romanlutz romanlutz added the enhancement New feature or request label Nov 16, 2023
@hildeweerts
Contributor

Given that we've already implemented fairlearn.metrics.mean_prediction(), it would be extremely easy to also support something like mean_prediction_difference().

We should perhaps be clearer in our user guide that metrics such as fairlearn.metrics.mean_squared_error_group_max already exist. These manufactured metrics currently do not show up in the API docs, and I'm a little conflicted about how to handle that: in the current setup of our API docs, including all of these metrics would swamp the docs, but people being unable to locate the functionality at all is also far from ideal.
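
If it helps, my understanding is that such a manufactured metric is essentially equivalent to the following make_derived_metric construction (a sketch, not the exact generated code):

```python
from fairlearn.metrics import make_derived_metric
from sklearn.metrics import mean_squared_error

# Worst-case (largest) per-group MSE, analogous to mean_squared_error_group_max.
mse_group_max = make_derived_metric(
    metric=mean_squared_error, transform="group_max"
)
# worst = mse_group_max(y_true, y_pred, sensitive_features=sensitive)
```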
