Update/optimize evaluation modules with a benchmark script for testing multiple data-recommender-model pairs #61
Conversation
Codecov Report: Base: 80.24% // Head: 80.30% // Increases project coverage by +0.05%.

Additional details and impacted files:

```diff
@@            Coverage Diff            @@
##           master      #61     +/-  ##
=========================================
+ Coverage   80.24%   80.30%   +0.05%
=========================================
  Files          26       26
  Lines         815      853      +38
=========================================
+ Hits          654      685      +31
- Misses        161      168       +7
```

☔ View full report at Codecov.
If a truth element is not found in the prediction vector, its "pseudo" rank must be set to the lowest position, i.e., `length(pred)`, rather than 0.
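As a quick illustration of that rule, here is a minimal Julia sketch; the helper name `pseudo_rank` is hypothetical and not part of the package:

```julia
# Hypothetical helper illustrating the pseudo-rank rule described above.
function pseudo_rank(truth, pred::AbstractVector)
    idx = findfirst(isequal(truth), pred)
    # If the truth element is absent from the prediction vector, fall back to
    # the lowest position, length(pred), rather than 0, so that rank-based
    # metrics are penalized for the miss instead of spuriously rewarded.
    return idx === nothing ? length(pred) : idx
end

pseudo_rank(3, [3, 5, 1])  # 1: found at the top
pseudo_rank(9, [3, 5, 1])  # 3 (= length(pred)): not found
```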
* Each recommender chooses whether it uses ranking metrics, accuracy metrics, or both, depending on whether its outputs can be evaluated by accuracy metrics.
* Evaluate multiple metrics at once with `evaluate(..., metrics::AbstractArray{<:Metric}, ...)`, with a note on multi-threading (see the sketch after this list).
* Fix `kwargs` for `AggregatedMetric` to `topk` to unify the function interfaces.
* Allow a recommender to recommend previously-observed items.
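As a rough illustration of the multi-metric, multi-threaded evaluation described above, here is a minimal Julia sketch. The names `evaluate_all` and `recall_at_k`, and the callable-metric convention, are assumptions for illustration rather than the package's actual API; `topk` is passed as a keyword to mirror the unified interface mentioned in the list.

```julia
# Hypothetical multi-metric evaluation: score every truth/prediction pair
# against every metric in one threaded pass, then average per metric.
function evaluate_all(metrics::AbstractVector, truths::AbstractVector,
                      preds::AbstractVector; topk::Int = 10)
    scores = zeros(length(metrics), length(truths))
    # Each pair is scored independently, so the outer loop parallelizes safely.
    Threads.@threads for j in eachindex(truths)
        for (i, metric) in enumerate(metrics)
            scores[i, j] = metric(truths[j], preds[j]; topk = topk)
        end
    end
    return vec(sum(scores, dims = 2)) ./ length(truths)  # mean per metric
end

# An example metric callable following the same (hypothetical) convention.
recall_at_k(truth, pred; topk) =
    length(intersect(truth, pred[1:min(topk, end)])) / length(truth)

# Usage:
truths = [[1, 3], [2]]
preds  = [[3, 5, 1], [4, 2, 6]]
evaluate_all([recall_at_k], truths, preds; topk = 2)  # [0.75]
```

Because the pairs are independent, `Threads.@threads` is sufficient here; start Julia with, e.g., `julia -t 4` to make multiple threads available.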
Related issues:

* `IntraListMetric` as part of benchmark script (and by the `evaluate()` interface accordingly) #26
* `recommend()` with bulk prediction #64