[Question] Use precision and recall metrics for implicit feedback evaluation #89
Hi @aleSuglia, if I have understood your question, you want to use RiVal for "implicit feedback". This is currently not supported, although we aim to do it in the future. Regards,
What I do in the second step is: compute, for each user, a list of at most N elements, ranked by the score produced by the algorithm I've implemented; for each of these items, I add a preference (with value 1) to the predictions model. I always call compute() before calling getValue(), but I always get NaN.
Is there anything wrong? EDIT:
Yes, that is what I was going to suggest: include the array of cutoffs.
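To see why a missing cutoff array matters, here is a minimal self-contained sketch (a hypothetical illustration, not RiVal's actual implementation): a ranking metric typically pre-computes its value only for the cutoffs it was given, so querying `getValueAt(n)` for a cutoff that was never registered yields NaN.

```java
import java.util.*;

// Sketch of a cutoff-based ranking metric (precision@n). Hypothetical:
// class and method names mimic the discussion, not RiVal's real API.
public class CutoffMetric {
    private final Map<Integer, Double> valueAt = new HashMap<>();

    // Compute precision@n only for the cutoffs passed in 'ats'.
    public void compute(List<String> ranked, Set<String> relevant, int[] ats) {
        for (int n : ats) {
            int hits = 0;
            for (int i = 0; i < Math.min(n, ranked.size()); i++) {
                if (relevant.contains(ranked.get(i))) hits++;
            }
            valueAt.put(n, (double) hits / n);
        }
    }

    // Querying a cutoff that was never computed returns NaN.
    public double getValueAt(int n) {
        return valueAt.getOrDefault(n, Double.NaN);
    }
}
```

With cutoffs {2, 4}, `getValueAt(10)` is NaN even though the metric was computed correctly at 2 and 4.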
Actually, the precision values are around 7.173601147776183E-5 for each fold, so I think my algorithm performs pretty badly.
I would suggest you test two recommenders: a random one and another one that always recommends the most popular items. These two should give you useful reference scores.
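The most-popular baseline mentioned above can be sketched in a few lines (a hypothetical, self-contained illustration, not code from RiVal): rank items by how many users interacted with them, and recommend the same top-N list to every user.

```java
import java.util.*;
import java.util.stream.*;

// Most-popular baseline: count interactions per item, return the N most
// frequent items. Interactions are (user, item) pairs, as in the dataset
// described in this issue.
public class MostPopular {
    public static List<String> topN(List<String[]> interactions, int n) {
        Map<String, Long> counts = interactions.stream()
                .collect(Collectors.groupingBy(t -> t[1], Collectors.counting()));
        return counts.entrySet().stream()
                .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
                .limit(n)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }
}
```

Scoring this list with the same precision/recall setup gives a floor that a personalized recommender should beat.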
Thank you for your help. What would you need in order to support implicit feedback evaluation?
Implicit feedback is still recorded as an interaction, so the functionality isn't related to implicit feedback per se, but rather to unary/binary data. Having support for this would be great, so we can answer the question with a yes and create issues for adding the functionality instead.
As you can see from the title, this is not really an issue with RiVal, but a question I want to ask you in order to understand how I should use the tool for my task.
I'm implementing a recommender system for a top-N recommendation task in an implicit feedback context. By implicit feedback I mean that I only know from my dataset that a user "likes" an item, nothing more (my dataset contains tuples of the form (user, item)).
So I've decided to construct a DataModel by associating with each tuple (u_m, i_k) in the dataset a preference of 1 for user u_m on item i_k.
This is an abstraction of the code that I use:
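(The code block is missing from the issue as archived; the following is a minimal self-contained sketch of what the surrounding text describes, using a plain map-backed class in place of RiVal's DataModel: every (user, item) tuple becomes a preference of 1.)

```java
import java.util.*;

// Hypothetical stand-in for a unary-feedback data model: each observed
// (user, item) interaction is stored as a preference with value 1.0.
public class UnaryDataModel {
    private final Map<String, Map<String, Double>> prefs = new HashMap<>();

    public void addPreference(String user, String item, double value) {
        prefs.computeIfAbsent(user, u -> new HashMap<>()).put(item, value);
    }

    // Build the model from (user, item) tuples, as described in the issue.
    public static UnaryDataModel fromTuples(List<String[]> tuples) {
        UnaryDataModel model = new UnaryDataModel();
        for (String[] t : tuples) {
            model.addPreference(t[0], t[1], 1.0);  // implicit "like" -> value 1
        }
        return model;
    }

    public Map<String, Double> getUserPreferences(String user) {
        return prefs.getOrDefault(user, Collections.emptyMap());
    }
}
```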
After that, I construct a predictions data model that contains at most N elements for each user, according to the top-N recommendation task. Here I associate a preference of 1 with each item present in a user's recommendation list.
I create precision and recall objects using their main constructors, which take the predictions model and the test-set model as parameters.
When I compute precision and recall, I get NaN values when I call getValueAt(), and I'm starting to think I'm doing something wrong. The per-user metrics are also NaN.
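Besides an unregistered cutoff, another common source of NaN per-user values can be sketched as follows (an assumption about the general metric definition, not RiVal's internals): recall@n divides by the number of relevant items a user has in the test set, so a user with an empty test set yields 0/0 = NaN, and averaging that in makes the global value NaN too.

```java
import java.util.*;

// Sketch of recall@n for unary feedback. 'relevant' is the user's test-set
// items; an empty test set produces 0.0/0.0 = NaN.
public class RecallAtN {
    public static double recallAt(List<String> ranked, Set<String> relevant, int n) {
        int hits = 0;
        for (int i = 0; i < Math.min(n, ranked.size()); i++) {
            if (relevant.contains(ranked.get(i))) hits++;
        }
        return (double) hits / relevant.size();  // 0/0 -> NaN for empty test sets
    }
}
```

So it is worth checking that every user in the predictions model actually has test-set items under the chosen split.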
Can you help me solve this?
Thank you in advance.