The current pipeline (see the example in rival-examples) produces many redundant recommendations under certain evaluation strategies: recommendations/rating predictions are computed for users and items in the test set even though they will never be evaluated, because the evaluation strategy removes them from the evaluation test set it creates.
One potential change would be to run the evaluation strategy already when the data splitting is performed (sketched below). This is obviously not without issues, e.g. what should happen to items/users not selected by the evaluation strategy but potentially selected for the test set: do they go back to the training set, or are they discarded?
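A minimal sketch of the idea, purely hypothetical: none of the type or method names below (Rating, EvalStrategy, StrategyAwareSplitter, etc.) come from RiVal's actual API; they only illustrate applying the strategy's selection rule in the same pass as the split, with the open question above exposed as a flag.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class StrategyAwareSplitter {

    /** A (user, item, rating) triple. */
    public record Rating(long user, long item, double value) {}

    /** The two partitions produced by the split. */
    public record Split(List<Rating> train, List<Rating> test) {}

    /** Hypothetical stand-in for an evaluation strategy's selection rule. */
    public interface EvalStrategy {
        boolean isEvaluated(long user, long item);
    }

    /**
     * Splits the data and applies the evaluation strategy in the same pass,
     * so recommenders are never asked to score (user, item) pairs the
     * strategy would later discard. Pairs dropped from the raw test set
     * either return to training or are discarded, controlled by
     * recycleToTraining -- the open design question raised above.
     */
    public static Split split(List<Rating> data, Set<Rating> rawTest,
                              EvalStrategy strategy, boolean recycleToTraining) {
        List<Rating> train = new ArrayList<>();
        List<Rating> test = new ArrayList<>();
        for (Rating r : data) {
            if (!rawTest.contains(r)) {
                train.add(r);                 // ordinary training datum
            } else if (strategy.isEvaluated(r.user(), r.item())) {
                test.add(r);                  // kept: it will actually be evaluated
            } else if (recycleToTraining) {
                train.add(r);                 // option 1: return it to training
            }                                 // option 2: drop it entirely
        }
        return new Split(train, test);
    }
}
```

Either choice of recycleToTraining changes the effective training set, so results would need to be checked against the current two-phase pipeline before adopting one as the default.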
How does the current literature deal with this?
I think that with issue #54 this should be clear, although there may be other ways of doing this (which would need further analysis and research to check whether the results are comparable or not).