
Some implementation problems #17

Closed
ty4b112 opened this issue Dec 27, 2020 · 2 comments


ty4b112 commented Dec 27, 2020

Thanks for providing the source code for your works DiffNet and DiffNet++.

Since I am a PyTorch user, I want to reimplement your works in PyTorch with a more recent Python version, so that more researchers can compare their work with yours. However, I was confused by some of your implementation details:

  1. In DataModule.py, you use generateConsumedItemsSparseMatrix() to get the user-item graph. As I understand it, you make the train/valid/test data each correspond to a data file and build the user-item graph described by that file, which would cause a severe problem: during testing, your model would have access to all of the true test data, meaning it answers questions whose ground truth it has already seen. The training process would have the same data leakage problem.
  2. In eq-4 of your paper 'A Neural Influence Diffusion Model for Social Recommendation', you say you use a regularization parameter to control the complexity of the user and item free embedding matrices. But in your code diffnet.py, you seem to compute only the MSE loss between the ground truth and your predictions.
PeiJieSun (Owner) commented:

Thanks for your attention to our work.

  1. Don't worry about the data leak issue; we execute the generateConsumedItemsSparseMatrix function with different input data for each split. You can find the details in /diffnet/train.py, line 33.
  2. We are sorry, we may not have implemented the regularization term when optimizing our model. However, you can set weight_decay to an appropriate value when re-implementing our code with the PyTorch Adam optimizer.
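For the second point, a minimal sketch of what the re-implemented objective could look like: the MSE rating loss plus an explicit L2 penalty on the free embedding matrices, as eq-4 describes. All names below (total_loss, user_emb, item_emb, reg_lambda) are illustrative, not taken from the repository; the sketch is dependency-free so the arithmetic is easy to check.

```python
# Dependency-free sketch of the loss from eq-4: MSE rating loss plus an
# L2 penalty on the user/item free embedding matrices.
# All names (user_emb, item_emb, reg_lambda) are illustrative only.

def mse_loss(preds, labels):
    # mean squared error between predictions and ground-truth ratings
    return sum((p - y) ** 2 for p, y in zip(preds, labels)) / len(preds)

def l2_penalty(*matrices):
    # sum of squared entries (squared Frobenius norm) of each matrix
    return sum(x * x for m in matrices for row in m for x in row)

def total_loss(preds, labels, user_emb, item_emb, reg_lambda=1e-4):
    # with reg_lambda = 0 this reduces to the plain MSE loss
    return mse_loss(preds, labels) + reg_lambda * l2_penalty(user_emb, item_emb)
```

In a PyTorch reimplementation, passing weight_decay to torch.optim.Adam applies a comparable L2-style decay to every parameter, though Adam's coupled weight decay is not exactly equivalent to adding this penalty to the loss.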


ty4b112 commented Dec 28, 2020

Thanks for your quick reply.
I am sorry, I made a serious mistake out of carelessness. It is true that there is no data leak issue during testing.

@ty4b112 ty4b112 closed this as completed Dec 28, 2020