In Sec. 3.2 of your great original paper, you train the model with a sigmoid-based BPR loss (i.e., Eq. (13)), which uses the pairwise term (r_ai - r_aj). However, in your implementation (diffnet.py, line 127) the loss is `self.opt_loss = tf.nn.l2_loss(self.labels_input - self.prediction)`. That is a pointwise MSE-style loss, which is quite different from the BPR loss. I would like to know which one is the correct version?
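For context, the two losses being contrasted can be sketched in plain numpy. This is a minimal illustration, not the repo's actual TensorFlow code; the toy scores and shapes are made up:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bpr_loss(r_pos, r_neg):
    # Pairwise BPR-style loss as in Eq. (13): -log sigmoid(r_ai - r_aj),
    # summed over sampled (positive i, negative j) pairs for each user a.
    return -np.sum(np.log(sigmoid(r_pos - r_neg)))

def l2_loss(labels, preds):
    # Pointwise loss matching tf.nn.l2_loss(labels - preds):
    # sum of squared errors divided by 2.
    return 0.5 * np.sum((labels - preds) ** 2)

# Toy predicted scores: positives should rank above negatives.
r_pos = np.array([2.0, 1.5])
r_neg = np.array([0.5, 0.2])

# For the pointwise loss, each interaction gets a 0/1 label instead.
labels = np.array([1.0, 1.0, 0.0, 0.0])
preds = np.concatenate([sigmoid(r_pos), sigmoid(r_neg)])

print(bpr_loss(r_pos, r_neg))
print(l2_loss(labels, preds))
```

BPR only depends on the score *difference* within each positive/negative pair (a ranking objective), while the L2 loss fits each prediction to an absolute 0/1 label, which is why the two are not interchangeable.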
Hi, thank you for your attention to our work. Please refer to this GitHub version; it is the correct one. Sorry for the wrong loss function in the paper.
Thank you for your quick reply!
In DiffNet and DiffNet++, you seem to train and test on implicit-feedback datasets (e.g., Yelp, where ratings larger than 3 are treated as positive and the rest are discarded). As far as I know, an MSE-based loss is usually used with explicit datasets that have both positive and negative labels. So if DiffNet's loss function is MSE-based, where do the negative inputs come from?
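For reference, the usual way a pointwise loss gets negative labels from implicit data is negative sampling: unobserved (user, item) pairs are drawn at random and given label 0. A hypothetical sketch (function name, data layout, and sampling ratio are my own, not from the DiffNet code):

```python
import random

def sample_training_pairs(user_pos_items, n_items, num_neg=1, seed=0):
    """Build (user, item, label) triples from implicit feedback.

    Observed interactions get label 1; for each of them, `num_neg`
    unobserved items are sampled at random and given label 0, so a
    pointwise loss (e.g. MSE/L2) has both kinds of targets to fit.
    """
    rng = random.Random(seed)
    pairs = []
    for user, pos_items in user_pos_items.items():
        pos_set = set(pos_items)
        for item in pos_items:
            pairs.append((user, item, 1.0))       # observed -> positive
            for _ in range(num_neg):
                j = rng.randrange(n_items)
                while j in pos_set:               # resample if observed
                    j = rng.randrange(n_items)
                pairs.append((user, j, 0.0))      # unobserved -> negative

    return pairs

# User 0 liked items 1 and 3; user 1 liked item 2; catalog of 10 items.
pairs = sample_training_pairs({0: [1, 3], 1: [2]}, n_items=10)
```

With `num_neg=1` this yields one sampled negative per positive, i.e., a balanced 0/1 label set that a squared-error loss can be trained on.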