Simple metrics are producing different values for the same recommendations #19

Open

nikosT opened this issue Mar 24, 2022 · 0 comments

nikosT (Contributor) commented Mar 24, 2022

Describe the bug
The simple metrics ItemCoverage, UserCoverage, and NumRetrieved produce different values for the same recs/ folder depending on whether the evaluation is run as part of training the model or by loading the saved recommendations and running the evaluation on its own.
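For context, the first procedure corresponds to a standard Elliot experiment in which the model is trained and its recommendation lists are saved. A minimal configuration sketch of that setup is shown below; the dataset name, file paths, and the ItemKNN hyper-parameters (neighbors, similarity) are illustrative placeholders, not the exact values used here.

```yaml
experiment:
  dataset: my_dataset                # illustrative name
  data_config:
    strategy: fixed
    train_path: ../data/my_dataset/train.tsv   # the pre-split training file
    test_path: ../data/my_dataset/test.tsv     # the pre-split test file
  top_k: 10
  evaluation:
    simple_metrics: [ItemCoverage, UserCoverage, NumRetrieved]
  models:
    ItemKNN:
      meta:
        save_recs: True              # write the recommendation lists to results/.../recs/
      neighbors: 40                  # illustrative value
      similarity: cosine             # illustrative value
```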

To Reproduce
Steps to reproduce the behavior:

  1. Run a model, e.g. ItemKNN, which will write its recommendations to the results/recs/ directory
  2. Keep the evaluation results produced during that run
  3. Load the results/recs/ directory as a RecommendationFolder
  4. Run the evaluation only
  5. Compare the two evaluations

In both cases, the input dataset uses strategy: fixed. The train.tsv and test.tsv files were previously produced by a random 0.2 split and are used as-is in both cases (a configuration sketch for the re-evaluation step is given below).
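The second procedure corresponds to re-evaluating the already-saved recommendation lists. A minimal sketch of such a configuration, assuming the RecommendationFolder model accepts a folder parameter pointing at the saved lists (the parameter name and the paths here are assumptions for illustration), would be:

```yaml
experiment:
  dataset: my_dataset                # same dataset and split as the original run
  data_config:
    strategy: fixed
    train_path: ../data/my_dataset/train.tsv
    test_path: ../data/my_dataset/test.tsv
  top_k: 10
  evaluation:
    simple_metrics: [ItemCoverage, UserCoverage, NumRetrieved]
  models:
    RecommendationFolder:
      folder: ../results/my_dataset/recs/   # assumed parameter name; points at the saved recommendation lists
```

Both configurations are run the same way, e.g. run_experiment("config.yml") from elliot.run, yet the reported ItemCoverage, UserCoverage, and NumRetrieved values differ between the two runs even though the recommendation lists are identical.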

System details (please complete the following information):

  • OS: Debian
  • Python Version 3.8
  • Library versions: installed via the conda elliot_env environment, as described in the documentation