Add recommendation breakdown function #4188
Conversation
Codecov Report
@@           Coverage Diff           @@
##            main   #4188    +/-   ##
=======================================
+ Coverage   99.7%   99.7%   +0.1%
=======================================
  Files        349     349
  Lines      38107   38128     +21
=======================================
+ Hits       37990   38011     +21
  Misses       117     117
LGTM but wondering if we should include the score weights as well!
@@ -1714,6 +1714,30 @@ def full_rankings(self):
         rankings_df.reset_index(drop=True, inplace=True)
         return rankings_df

     def get_recommendation_score_breakdown(self, pipeline_id):
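For context, a minimal sketch of the shape this method could take. The class name, the stored data, and the objective names here are illustrative assumptions, not the actual implementation in the PR; the point is just that the method returns the raw, non-normalized objective scores behind a pipeline's recommendation score.

```python
# Hypothetical sketch, not the actual PR implementation: the method looks up
# the raw (non-normalized) objective scores recorded for a pipeline and
# returns a copy so callers can't mutate internal state.

class AutoMLSearchSketch:
    def __init__(self):
        # pipeline_id -> {objective name: raw score}; toy data for illustration
        self._objective_scores = {
            0: {"Log Loss Binary": 0.35, "F1": 0.82, "AUC": 0.91},
        }

    def get_recommendation_score_breakdown(self, pipeline_id):
        """Return the raw objective scores behind a pipeline's recommendation score."""
        return dict(self._objective_scores[pipeline_id])


search = AutoMLSearchSketch()
breakdown = search.get_recommendation_score_breakdown(0)
print(breakdown)
```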
should we include the score weights as well?
I'm leaning towards not - the score weights are known and/or set ahead of time, while the scores themselves are not. I'd rather keep this simpler and more understandable, but I am open to adding something if you disagree!
Where do we display the weights to the user? I don't see anything here so far. If the purpose of get_recommendation_score_breakdown is to explain to the user how the recommendation score is calculated, I think we should give them the weights so they know how to go from the metrics to the recommendation score.
The weights aren't directly displayed to the user, but the docstring states that they are weighted equally by default, and we offer methods for the user to customize them.
The goal of this function isn't to explicitly explain how the recommendation score was calculated - that's why the reported values are explicitly the non-normalized ones. Rather, the goal is to present the user with the evaluated objectives behind the recommendation score. What they do with that information is up to them; the intent is to give insight into direct model performance rather than "here's the equation".
works for me!
This looks good, and it should play nicely with the upcoming stories that build on this functionality. I left a nit about parametrizing the test, but otherwise, looks good! Accept or reject my suggestion as you see fit.
Closes #4187