
Rethinking evaluation strategies #60

Open

alansaid opened this issue Jun 13, 2014 · 1 comment

@alansaid (Member)

The current pipeline (see the example in rival-examples) produces a lot of redundant recommendations for certain evaluation strategies, i.e. recommendations/rating predictions are generated for users and items in the test set even though they will never be evaluated, because the evaluation strategy removes them from the evaluation test set it creates.
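
A minimal sketch of where that redundant work happens, using hypothetical stand-ins (Pair, Recommender, EvaluationStrategy and their methods are not the actual RiVal API): predictions are produced for every pair in the test split first, and only afterwards does the evaluation strategy drop the pairs it will not score.

```java
import java.util.*;

public class CurrentPipelineSketch {
    // Hypothetical stand-ins for the real split / strategy / recommender classes.
    record Pair(String user, String item) {}
    interface Recommender { double predict(String user, String item); }
    interface EvaluationStrategy { boolean retains(Pair p); }

    static Map<Pair, Double> run(List<Pair> testPairs,
                                 Recommender rec,
                                 EvaluationStrategy strategy) {
        Map<Pair, Double> predictions = new HashMap<>();
        // Step 1: predict for *all* pairs in the test split.
        for (Pair p : testPairs) {
            predictions.put(p, rec.predict(p.user(), p.item()));
        }
        // Step 2: the strategy removes many of those pairs again, so the
        // predictions computed for them in step 1 were redundant.
        predictions.keySet().removeIf(p -> !strategy.retains(p));
        return predictions;
    }
}
```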

One potential change would be to apply the evaluation strategy already when the data splitting is performed. This is obviously not without issues, e.g. what should happen to items/users not selected by the evaluation strategy but potentially selected for the test set: do they go back to the training set, or are they discarded?
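
As an illustration of that alternative, here is a minimal sketch (again with hypothetical names, not the actual RiVal API) of applying the strategy at split time, with both options for the rejected pairs shown:

```java
import java.util.*;

public class SplitTimeStrategySketch {
    record Pair(String user, String item) {}
    record Split(List<Pair> training, List<Pair> test) {}
    interface EvaluationStrategy { boolean retains(Pair p); }

    // Filter the test split with the strategy before any recommender is run,
    // so predictions are only ever requested for pairs that will be evaluated.
    static Split applyStrategyAtSplitTime(List<Pair> training,
                                          List<Pair> test,
                                          EvaluationStrategy strategy,
                                          boolean returnRejectedToTraining) {
        List<Pair> keptTest = new ArrayList<>();
        List<Pair> newTraining = new ArrayList<>(training);
        for (Pair p : test) {
            if (strategy.retains(p)) {
                keptTest.add(p);
            } else if (returnRejectedToTraining) {
                // Option 1: rejected pairs go back to the training set ...
                newTraining.add(p);
            }
            // Option 2: ... or they are simply discarded (empty else branch).
        }
        return new Split(newTraining, keptTest);
    }
}
```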

How does current literature deal with this?

alansaid added this to the 0.4 milestone Mar 13, 2015
@abellogin (Member)

I think that with issue #54 this should be clear, although there may be other alternative ways of doing this (which would need further analysis and research to check whether the results are comparable or not).
