Problems of results of LightGCN #234
Comments
For SEPT, the best results are reported because the intermediate results are recorded after each epoch. Line 312 in 2483ad0
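The per-epoch bookkeeping described above can be sketched as a small helper (a minimal illustration; the function name and the use of NDCG as the tracked metric are assumptions, not QRec's actual API):

```python
def best_result(ndcg_per_epoch):
    """Return (best_ndcg, epoch) over intermediate per-epoch results.

    Recording the metric after every epoch and keeping the maximum is
    what "the best results are reported" means here (illustrative sketch).
    """
    best, best_epoch = 0.0, -1
    for epoch, ndcg in enumerate(ndcg_per_epoch):
        if ndcg > best:  # keep the best intermediate result seen so far
            best, best_epoch = ndcg, epoch
    return best, best_epoch
```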
For LightGCN, you can use the same method to get the best performance. In my experience, LightGCN needs hundreds of epochs to converge. One way to speed up training is to apply L2 normalization at each layer. The results reported in the paper were based on Python 2. We later upgraded QRec and ported it to Python 3, so there may be some trivial differences in performance. We also conducted 5-fold cross-validation; I'm not sure if you applied the same experimental setting. BTW, you can try our PyTorch implementation of LightGCN at https://github.com/Coder-Yu/SELFRec/blob/main/model/graph/LightGCN.py
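The per-layer L2 normalization mentioned above can be sketched in NumPy as follows. This is an assumption-laden illustration of the idea, not the repo's code: `propagate` and its arguments are hypothetical names, and the normalized adjacency matrix is taken as given.

```python
import numpy as np

def propagate(adj, emb, n_layers, l2_normalize=True):
    """LightGCN-style propagation: average embeddings over layers.

    If l2_normalize is True, row-wise L2 normalization is applied after
    each propagation layer, which the maintainer reports speeds up
    convergence (sketch only; `adj` is assumed already normalized).
    """
    layer_embs = [emb]
    h = emb
    for _ in range(n_layers):
        h = adj @ h  # neighborhood aggregation, no feature transform
        if l2_normalize:
            norms = np.linalg.norm(h, axis=1, keepdims=True)
            h = h / np.clip(norms, 1e-12, None)  # row-wise L2 normalization
        layer_embs.append(h)
    return np.mean(layer_embs, axis=0)  # final embedding = mean over layers
```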
Thank you for your reply. I just found that changing reduce_mean to reduce_sum in QRec/model/ranking/LightGCN.py (Line 19 in 2483ad0), as SEPT does, can make the original LightGCN behave similarly to SEPT in the early training phase.
Thank you for your time and great repo!
"changing reduce_mean to reduce_sum in LightGCN as SEPT can make the original LightGCN behave similarly as SEPT in the early training phase" Exactly. I forgot to pinpoint this cause. I wish you good luck with your study.
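The reduce_mean vs. reduce_sum difference pinpointed above changes the scale of the batch loss (and hence the effective gradient magnitude for a fixed learning rate). A minimal sketch of a BPR-style loss showing the two reductions (illustrative only; `bpr_loss` is a hypothetical name, not the repo's function):

```python
import numpy as np

def bpr_loss(pos_scores, neg_scores, reduce="sum"):
    """BPR pairwise loss over a batch.

    reduce="sum" scales the loss (and gradients) by the batch size
    relative to reduce="mean", which is why the two variants behave
    differently early in training (sketch, not QRec's exact code).
    """
    # -log(sigmoid(pos - neg)) per pair, with a small epsilon for stability
    diff = pos_scores - neg_scores
    losses = -np.log(1.0 / (1.0 + np.exp(-diff)) + 1e-12)
    return losses.mean() if reduce == "mean" else losses.sum()
```

For a batch of size B, the summed loss is exactly B times the mean loss, so switching to reduce_sum is similar to multiplying the learning rate by the batch size.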
Thanks a lot! |
Hi, I've run LightGCN in QRec recently, but I find that its performance is not good. The results I get on Yelp after 1000 epochs are
'Precision:0.0252', 'Recall:0.0575', 'F1:0.0350', 'NDCG:0.0465'
while the results of SEPT on the same dataset after 30 epochs are
'Precision:0.0310', 'Recall:0.0712', 'F1:0.0432', 'NDCG:0.0583'.
I also changed
if epoch > self.maxEpoch / 3:
to
if epoch > self.maxEpoch:
in SEPT, which means SEPT will behave like LightGCN. But the results after 30 epochs,
'Precision:0.0289', 'Recall:0.0641', 'F1:0.0399', 'NDCG:0.0532',
are still much better than the original LightGCN's. BTW, the initial rec_loss of the degraded SEPT is much lower than that of the original LightGCN (~300/batch vs. ~1000/batch). So I want to know how to make the original LightGCN better. I've already tried the same normalization and initialization as SEPT, but it doesn't work.