Hi, I am reading your paper and code. It is very interesting work, and I would like to use it as a baseline. However, I have some questions about the implementation.
(1) When updating the filters, there seem to be two goals: minimize the recommendation loss and maximize the classification loss. In your paper, a hyperparameter $\lambda$ combines the two losses into a single objective. In your code, however, the two losses appear to be optimized separately, and the classifier is updated on only one batch, as shown below. Which version should I use?
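To make the question concrete, here is a toy sketch of the two update schemes I mean. All names (`rec_loss`, `cls_loss`, the parameter vector `w`) are illustrative placeholders, not from your repository, and the losses are arbitrary toy functions:

```python
import numpy as np

def rec_loss(w):
    # toy recommendation loss (to be minimized)
    return float(np.sum((w - 1.0) ** 2))

def cls_loss(w):
    # toy attribute-classification loss (to be maximized w.r.t. the filter)
    return float(np.sum(w ** 2))

def grad(f, w, eps=1e-5):
    # central-difference numerical gradient; enough for a sketch
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (f(w + e) - f(w - e)) / (2 * eps)
    return g

def combined_update(w, lam=0.5, lr=0.1):
    # paper-style: one objective, rec_loss - lambda * cls_loss
    obj = lambda v: rec_loss(v) - lam * cls_loss(v)
    return w - lr * grad(obj, w)

def alternating_update(w, lam=0.5, lr=0.1):
    # code-style: two separate gradient steps per batch
    w = w - lr * grad(rec_loss, w)        # descend the recommendation loss
    w = w + lr * lam * grad(cls_loss, w)  # ascend the classification loss
    return w
```

For plain SGD with a small learning rate the two are close to first order; the difference is that the alternating scheme evaluates the second gradient at the already-updated point. Is that difference intentional, or is the combined objective from the paper the one to reproduce?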
(2) If the prediction is computed not by inner product but by a neural network (NN), do I need to update the NN together with the filters when I update the filters?
Looking forward to your reply!