
Update/optimize evaluation modules with a benchmark script for testing multiple data-recommender-model pairs #61

Merged: 24 commits into master (Nov 20, 2022)

Conversation

@takuti (Owner) commented on Apr 11, 2022

#26

  • Assess whether IntraListMetric can be tested as part of the benchmark script (and update the evaluate() interface accordingly)
  • Investigate why the SVD-based recommender is so slow for ranking evaluation -> Optimize recommend() with bulk prediction #64
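The second checklist item hinges on a common optimization: instead of calling a per-item predict routine once for every candidate item inside recommend(), score all items in a single pass and rank once. The sketch below illustrates that idea only — the repository itself is written in Julia, and every name here (predict, recommend_per_item, recommend_bulk) is hypothetical, not Recommendation.jl's actual API.

```python
# Hedged sketch of "bulk prediction" for a factorization-style recommender.
# All function names are illustrative, not Recommendation.jl's real interface.

def predict(user_vec, item_vec):
    """Score one (user, item) pair, e.g. a dot product of latent factors."""
    return sum(u * v for u, v in zip(user_vec, item_vec))

def recommend_per_item(user_vec, item_vecs, topk):
    """Slow path: one predict() call per candidate item, repeated per user.
    This is the kind of loop that makes ranking evaluation expensive."""
    scored = [(i, predict(user_vec, v)) for i, v in enumerate(item_vecs)]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:topk]

def recommend_bulk(user_vec, item_vecs, topk):
    """Bulk path: compute every score in one pass over the factor matrix,
    then take the top-k once. In Julia this would become a single
    BLAS-backed matrix-vector product instead of a scalar loop."""
    scores = [sum(u * v for u, v in zip(user_vec, vec)) for vec in item_vecs]
    order = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
    return [(i, scores[i]) for i in order[:topk]]

# Both paths must rank items identically; only the cost profile differs.
user = [0.5, 1.0]
items = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
assert recommend_per_item(user, items, 2) == recommend_bulk(user, items, 2)
```

The payoff in the real optimization comes from replacing many small scalar operations with one vectorized product over the whole item-factor matrix, which is what the linked PR's "bulk prediction" refers to.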

@codecov-commenter commented on Apr 11, 2022

Codecov Report

Base: 80.24% // Head: 80.30% // Merging this pull request increases project coverage by +0.05% 🎉

Coverage data is based on head (22e382d) compared to base (6082408).
Patch coverage: 86.15% of modified lines in pull request are covered.

Additional details and impacted files
@@            Coverage Diff             @@
##           master      #61      +/-   ##
==========================================
+ Coverage   80.24%   80.30%   +0.05%     
==========================================
  Files          26       26              
  Lines         815      853      +38     
==========================================
+ Hits          654      685      +31     
- Misses        161      168       +7     
Impacted Files                       Coverage Δ
src/metrics/base.jl                  0.00% <0.00%> (ø)
src/metrics/aggregated.jl            93.93% <71.42%> (-6.07%) ⬇️
src/metrics/ranking.jl               93.93% <77.77%> (-1.15%) ⬇️
src/evaluation/evaluate.jl           90.00% <87.87%> (-6.00%) ⬇️
src/base_recommender.jl              94.73% <100.00%> (-1.27%) ⬇️
src/data_accessor.jl                 100.00% <100.00%> (ø)
src/evaluation/cross_validation.jl   100.00% <100.00%> (ø)



@takuti takuti changed the title Add benchmark script for testing multiple different data/recommender/model pairs Update/optimize evaluation modules with a benchmark script for testing multiple data-recommender-model pairs Nov 16, 2022
@takuti takuti merged commit 738888d into master Nov 20, 2022
@takuti takuti deleted the split branch November 20, 2022 21:09