
Evaluation of CoRec #33

Closed
liv1n9 opened this issue Apr 24, 2019 · 1 comment

Comments


liv1n9 commented Apr 24, 2019

Hi. I'm studying your paper. In the Evaluation chapter, Table 1, the paper compares the RMSE of UserKNN and ItemKNN when applied to the original training set and the enriched training set. My question is: the unlabeled set taken from the original training set for testing is different from the unlabeled set of the enriched one. Doesn't that affect the reliability of the evaluation? I think the evaluation is reliable only if the test sets in all cases are the same. Correct me if I'm wrong.

liv1n9 closed this as completed Apr 24, 2019
@arthurfortes
Member

Not really. You use a random set of uncollected data to enrich your original data matrix, regardless of whether those samples appear in your test set. In the end, the test set serves to validate how much the enriched matrices outperform those used in traditional approaches.
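To make the comparison concrete, here is a minimal sketch (with made-up rating values, not from the paper) of the evaluation setup being discussed: predictions from a model trained on the original matrix and one trained on the enriched matrix are both scored against the same fixed held-out test set, so the two RMSE numbers are directly comparable.

```python
import numpy as np

def rmse(predictions, truth):
    """Root-mean-square error between predicted and true ratings."""
    predictions = np.asarray(predictions, dtype=float)
    truth = np.asarray(truth, dtype=float)
    return float(np.sqrt(np.mean((predictions - truth) ** 2)))

# One fixed held-out test set: true ratings (hypothetical values).
test_truth = [4.0, 3.0, 5.0, 2.0]

# Hypothetical predictions on that same test set from a model trained on
# the original matrix vs. one trained on the enriched matrix.
preds_original = [3.5, 3.2, 4.1, 2.8]
preds_enriched = [3.9, 3.1, 4.7, 2.3]

print("RMSE (original training set):", rmse(preds_original, test_truth))
print("RMSE (enriched training set):", rmse(preds_enriched, test_truth))
```

Because both models are evaluated on the identical `test_truth`, any difference in RMSE reflects the training data, not a change in the test set.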
