Save computed additional objectives during search #4141
Conversation
Codecov Report
```
@@           Coverage Diff           @@
##            main   #4141     +/-   ##
=======================================
+ Coverage    99.7%   99.7%   +0.1%
=======================================
  Files        349     349
  Lines      37770   37778      +8
=======================================
+ Hits       37653   37661      +8
  Misses       117     117
```
```python
else:
    holdout_scores = evaluation_results["holdout_scores"]
    ranking_additional_objectives = dict(holdout_scores)
```
Can you clarify why we take the holdout score for `ranking_additional_objectives` here?
The end goal here is to always use the holdout scores instead of the mean CV scores, since conceptually they're a better measure of performance. However, I wanted to keep this as flexible as possible, so that we still have access to results even when holdout isn't run, which is the current default. This will let us build a recommendation score sooner, regardless of the holdout score work.
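A minimal sketch of that fallback, for illustration only. The `holdout_scores` branch mirrors the snippet above; the `cv_score_means` key and the helper name are assumptions, not the actual API:

```python
def ranking_objectives_for(evaluation_results):
    """Prefer holdout scores when holdout evaluation ran; otherwise fall
    back to the mean CV scores so results are always available.

    Note: "cv_score_means" and this helper are illustrative assumptions;
    only the holdout branch mirrors the code under review.
    """
    holdout_scores = evaluation_results.get("holdout_scores")
    if holdout_scores is None:
        # Holdout isn't run by default, so use the mean CV scores instead.
        return dict(evaluation_results["cv_score_means"])
    return dict(holdout_scores)
```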
LGTM
Closes #4140