
Problems of results of LightGCN #234

Closed
ShadowTinker opened this issue Jun 1, 2022 · 4 comments

@ShadowTinker

Hi, I've run LightGCN in QRec recently, but I find that its performance is not good. The results I get on Yelp are ['Precision:0.0252', 'Recall:0.0575', 'F1:0.0350', 'NDCG:0.0465'] after 1000 epochs, while the results of SEPT on the same dataset are ['Precision:0.0310', 'Recall:0.0712', 'F1:0.0432', 'NDCG:0.0583'] after 30 epochs. I also changed if epoch > self.maxEpoch / 3: to if epoch > self.maxEpoch: in SEPT, which makes SEPT behave like plain LightGCN. But its results, ['Precision:0.0289', 'Recall:0.0641', 'F1:0.0399', 'NDCG:0.0532'] after 30 epochs, are still much better than the original LightGCN's. BTW, the initial rec_loss of the degraded SEPT is much lower than that of the original LightGCN, ~300/batch vs. ~1000/batch.
So I want to know how to make the original LightGCN better. I've already tried the same normalization and initialization as SEPT, but it doesn't work.

@ShadowTinker ShadowTinker changed the title Problems of resutls of LightGCN Problems of results of LightGCN Jun 1, 2022
@Coder-Yu
Owner

Coder-Yu commented Jun 1, 2022

For SEPT, the best results are reported because the intermediate results are recorded after each epoch:

self.ranking_performance(epoch)
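The idea behind that call is simply to evaluate after every epoch and keep the best result seen so far. A minimal sketch of that pattern (the helper names train_one_epoch and evaluate are hypothetical, not QRec's API):

```python
def train_with_best_tracking(train_one_epoch, evaluate, max_epoch):
    """Train for max_epoch epochs, evaluating after each one and
    remembering the best-performing epoch (here judged by NDCG)."""
    best = {"epoch": -1, "ndcg": float("-inf"), "metrics": None}
    for epoch in range(max_epoch):
        train_one_epoch(epoch)
        metrics = evaluate(epoch)  # e.g. {'ndcg': ..., 'recall': ...}
        if metrics["ndcg"] > best["ndcg"]:
            best = {"epoch": epoch, "ndcg": metrics["ndcg"], "metrics": metrics}
    return best
```

Reporting the best epoch this way is why SEPT's numbers look strong after only 30 epochs, whereas a single final-epoch evaluation of LightGCN may miss its best checkpoint.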

For LightGCN, you can use the same approach to get the best performance. In my experience, LightGCN needs hundreds of epochs to converge. One way to speed up training is to apply L2 normalization at each layer.
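Here is a rough numpy sketch of what per-layer L2 normalization looks like inside LightGCN's propagation loop (toy code, not the QRec implementation; function and argument names are made up for illustration):

```python
import numpy as np

def lightgcn_propagate(adj_norm, ego_embeddings, n_layers, l2_norm_layers=False):
    """Sketch of LightGCN propagation.

    adj_norm:        symmetrically normalized user-item adjacency matrix
    ego_embeddings:  initial embeddings, shape (n_users + n_items, dim)
    l2_norm_layers:  if True, L2-normalize embeddings at each layer,
                     which in practice speeds up convergence
    """
    all_embeddings = [ego_embeddings]
    emb = ego_embeddings
    for _ in range(n_layers):
        emb = adj_norm @ emb  # one step of graph propagation
        if l2_norm_layers:
            norms = np.linalg.norm(emb, axis=1, keepdims=True)
            emb = emb / np.maximum(norms, 1e-12)  # row-wise L2 normalization
        all_embeddings.append(emb)
    # LightGCN averages the layer outputs (tf.reduce_mean over the layer axis)
    return np.mean(np.stack(all_embeddings, axis=0), axis=0)
```

Normalizing each layer keeps embedding magnitudes bounded, so gradients stay better conditioned early in training.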

The results reported in the paper were based on Python 2. We later upgraded QRec and ported it to Python 3, so there may be some minor differences in performance. We conducted 5-fold cross-validation; I'm not sure if you applied the same experimental setting. BTW, you can try our PyTorch implementation of LightGCN at https://github.com/Coder-Yu/SELFRec/blob/main/model/graph/LightGCN.py

@ShadowTinker
Author

Thank you for your reply.

I just found that changing reduce_mean to reduce_sum in LightGCN, as in SEPT, makes the original LightGCN behave similarly to SEPT in the early training phase. I think this may be the reason.

all_embeddings = tf.reduce_mean(all_embeddings, axis=0)
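A quick way to see the difference: with K propagation layers there are K+1 layer outputs, so reduce_sum over the layer axis yields embeddings exactly (K+1) times larger than reduce_mean. Inner-product scores then grow by a factor of (K+1)^2, which is consistent with the much lower initial rec_loss observed above. A toy numpy demo (illustrative shapes, not QRec code):

```python
import numpy as np

# 2 propagation layers -> 3 layer outputs (K+1 = 3), 5 nodes, dim 8
layer_embeddings = np.random.rand(3, 5, 8)

mean_emb = layer_embeddings.mean(axis=0)  # LightGCN's reduce_mean
sum_emb = layer_embeddings.sum(axis=0)    # SEPT-style reduce_sum

# sum aggregation is exactly (K+1) * mean aggregation
assert np.allclose(sum_emb, 3 * mean_emb)

# inner-product scores between two nodes are inflated by (K+1)^2 = 9
u, i = sum_emb[0], sum_emb[1]
u_m, i_m = mean_emb[0], mean_emb[1]
assert np.isclose(np.dot(u, i) / np.dot(u_m, i_m), 9.0)
```

Since BPR-style losses depend on score differences, this rescaling changes the effective learning dynamics rather than the model's expressive power.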

Thank you for your time and great repo!

@Coder-Yu
Owner

Coder-Yu commented Jun 1, 2022

"changing reduce_mean to reduce_sum in LightGCN, as in SEPT, makes the original LightGCN behave similarly to SEPT in the early training phase"

Exactly. I forgot to point out this cause. I wish you good luck with your study.

@ShadowTinker
Author

Thanks a lot!
