
[Evaluation] Add metrics for evaluating regression tasks #10

Closed
bcebere opened this issue Feb 6, 2023 · 0 comments · Fixed by #38
Labels: enhancement (New feature or request)

Comments

@bcebere (Contributor) commented Feb 6, 2023

Feature Description

One of the major tasks of the library is evaluating model quality and scoring the AutoML objectives.

To that end, metrics are needed for every supported problem type.

One of them is evaluating regression tasks. The library should offer an API for computing any of these metrics on the predicted values against the ground truth (see the sketch after the list below).

Important metrics to cover here:

  • r2" R^2(coefficient of determination) regression score function.
  • mse: Mean squared error regression loss.
  • mae: Mean absolute error regression loss.

AutoPrognosis reference: https://github.com/vanderschaarlab/autoprognosis/blob/main/src/autoprognosis/utils/tester.py
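
A minimal sketch of what such an evaluation API could look like, assuming a scikit-learn backend; the function name `evaluate_regression` and its signature are illustrative, not the library's actual interface:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score


def evaluate_regression(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Score predictions against the ground truth with the listed metrics."""
    return {
        "r2": r2_score(y_true, y_pred),              # coefficient of determination
        "mse": mean_squared_error(y_true, y_pred),   # mean squared error
        "mae": mean_absolute_error(y_true, y_pred),  # mean absolute error
    }


# Example usage with dummy data:
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])
print(evaluate_regression(y_true, y_pred))
# {'r2': 0.948..., 'mse': 0.375, 'mae': 0.5}
```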

@bcebere bcebere added the enhancement New feature or request label Feb 6, 2023
@DrShushen DrShushen transferred this issue from another repository Mar 3, 2023