Dingseewhole/CI_LightGCN_master

INTRODUCTION

This is a PyTorch implementation of "Causal Incremental Graph Convolution for Recommender System Retraining", published in IEEE Transactions on Neural Networks and Learning Systems. If you use the code, please cite our paper:


@ARTICLE{9737000,
author={Ding, Sihao and Feng, Fuli and He, Xiangnan and Liao, Yong and Shi, Jun and Zhang, Yongdong},
journal={IEEE Transactions on Neural Networks and Learning Systems},
title={Causal Incremental Graph Convolution for Recommender System Retraining},
year={2022},
pages={1-11},
doi={10.1109/TNNLS.2022.3156066}}


This README is a guide to reproducing the CI-LightGCN results. Since the evaluation protocol of CI-LightGCN is sequential, running the code without any intermediate products takes a very long time (for data processing, training at each stage, and testing at each stage). We therefore also provide intermediate products at "https://rec.ustc.edu.cn/share/16d25180-595a-11ec-b406-1baff4e05320" for quick reproduction. The code can also run without any intermediate products, at a large time cost.
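The sequential protocol above can be summarized as: retrain the model on each incremental stage, then evaluate it on the next stage's interactions. The sketch below illustrates that loop with placeholder stage IDs and dummy `train_stage` / `evaluate_stage` callables; these names are illustrative and are not the repository's actual API.

```python
# Hypothetical sketch of the sequential evaluation protocol: the model is
# retrained at each incremental stage and tested on the following stage.
# train_stage / evaluate_stage are stand-ins, not this repository's functions.

def run_sequential(stages, train_stage, evaluate_stage):
    """Retrain on each stage's incremental data, then test on the next stage."""
    results = []
    for t in range(len(stages) - 1):
        model_state = train_stage(stages[t])                         # retrain at stage t
        results.append(evaluate_stage(model_state, stages[t + 1]))   # test on stage t+1
    return results

# Toy usage with dummy callables, e.g. evaluation stages [30, 40):
stages = list(range(30, 40))
recalls = run_sequential(stages,
                         train_stage=lambda s: {"stage": s},
                         evaluate_stage=lambda m, s: (m["stage"], s))
print(recalls[0])  # (30, 31)
```

Because every stage depends on the previous stage's model state, the loop cannot be parallelized across stages, which is why the provided intermediate products save so much time.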

REQUIREMENTS

1. torch >= 1.7.1
2. scipy >= 1.5.2
3. numpy >= 1.19.1
4. CUDA 11.1
5. tqdm

Or you can use Docker to build the environment with the command docker pull dsihao/3090_torch171

Overall:

The necessary data files must be downloaded from "https://rec.ustc.edu.cn/share/16d25180-595a-11ec-b406-1baff4e05320"

The remaining parts are organized as follows:

A: how to quickly reproduce the results
B: an introduction to the data
C: an introduction to the intermediate products used for quick reproduction
D: how to retrain CI-LightGCN from scratch with no intermediate products

A. Quickly reproduce the results

  1. To quickly reproduce the result of CI-LightGCN on Yelp, use the command python CI-LightGCN.py --dataset='finetune_yelp' --model CILightGCN --finetune_epochs 300 --conv2d_reg 1e-2 --decay 1e-4 --icl_k 61 --notactive 1 --A 0.5 --inference_k 11 --radio_loss 0.7 --icl_reg 1

  2. To quickly reproduce the result of CI-LightGCN on Gowalla, use the command python CI-LightGCN.py --dataset='gowalla' --model CILightGCN --finetune_epochs 200 --conv2d_reg 1e-3 --decay 1e-3 --icl_k 58 --notactive 1 --A 0.6 --inference_k 28 --radio_loss 0.02 --icl_reg 0.0005

B. The processed data

  1. "./data/finetune_yelp" contains all the data you need for Yelp: "./data/finetune_yelp/train" is the training data, "./data/finetune_yelp/test" is the test data, and "./data/finetune_yelp/information.npy" holds the dataset's metadata.

All files below can be generated by our code; we provide them to save you reproduction time.

  1. "./data/finetune_yelp/xxx.npz" is the Adj matrix of each stage: the Adj matrices corresponding to the incremental graph are for CI-LightGCN and Fine-tune LightGCN, while those corresponding to the full graph are for Full-retrain LightGCN.
  2. "./data/finetune_yelp/xxx.npy" is the degree information of all nodes at each stage.
  3. "./data/finetune_yelp/rescale_vec" is the degree ratio between two adjacent stages.

All data files for Gowalla follow the same naming logic; change "/finetune_yelp/" to "/gowalla/" to get the files for Gowalla.
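The file formats above can be sketched with synthetic data. The snippet below builds a small sparse Adj matrix as saved in the `.npz` files, derives per-node degrees as stored in the `.npy` files, and computes a rescale vector as the degree ratio between two adjacent stages. All values and file names here are illustrative; the real files live under ./data/finetune_yelp/ or ./data/gowalla/.

```python
# Minimal sketch of the data-file formats, using synthetic values.
import numpy as np
import scipy.sparse as sp

# An incremental Adj matrix for one stage, stored as a sparse .npz file:
adj = sp.csr_matrix(np.array([[0, 1, 0],
                              [1, 0, 1],
                              [0, 1, 0]], dtype=np.float32))
sp.save_npz("stage_t_adj.npz", adj)
adj_loaded = sp.load_npz("stage_t_adj.npz")

# Node degrees at the previous stage (.npy) plus the incremental degrees
# give the new-stage degrees; the rescale vector is the ratio between the
# two adjacent stages' degrees:
deg_old = np.array([1.0, 2.0, 1.0])
deg_new = deg_old + np.asarray(adj_loaded.sum(axis=1)).ravel()
rescale_vec = deg_old / deg_new   # degree ratio between two adjacent stages
print(rescale_vec)
```

This is only a format illustration; the exact normalization used inside CI-LightGCN is defined in the paper and the code.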

C. The pre-processing results required for quick reproduction

a). weights files of static basic LightGCN

  1. "./code/checkpoints/static_base_LightGCN_gowalla-3-64-28.npy-.pth.tar" holds the weights of the basic LightGCN trained on data from stages [0, 30) of Gowalla. It is the input of all GCN-based methods, such as CI-LightGCN, Full-retrain LightGCN, Fine-tune LightGCN, SML-LightGCN-O, and SML-LightGCN-E. Alternatively, you can train a new static LightGCN model on stages [0, 30) of Gowalla yourself and use it as the input of all methods.

  2. "./code/checkpoints/static_base_LightGCN_finetune_yelp-3-64-28.npy-.pth.tar" holds the weights of the basic LightGCN trained on data from stages [0, 30) of Yelp. It is the input of all GCN-based methods, such as CI-LightGCN, Full-retrain LightGCN, Fine-tune LightGCN, SML-LightGCN-O, and SML-LightGCN-E. Alternatively, you can train a new static LightGCN model on stages [0, 30) of Yelp yourself and use it as the input of all methods.

b). model weights files of CI-LightGCN(T)

With these files you can skip the entire training phase of CI-LightGCN for quick reproduction. These files are generated by CI-LightGCN(T).

  1. "./save_for_inference/gowalla/Embeddings_at_stage__t_.pth.tar" holds the embeddings of all nodes at stage t, and "./save_for_inference/gowalla/Weights-3-64-t-npy.pth.tar" holds the model weights at stage t of Gowalla.

  2. "./save_for_inference/finetune_yelp/Embeddings_at_stage__t_.pth.tar" holds the embeddings of all nodes at stage t, and "./save_for_inference/finetune_yelp/Weights-3-64-t-npy.pth.tar" holds the model weights at stage t of Yelp.
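Loading such a checkpoint for inference is a single torch.load call. The sketch below saves and reloads a dummy embedding checkpoint in the same style; the dictionary keys and tensor sizes are assumptions for illustration, not the repository's actual checkpoint layout.

```python
# Hedged sketch of reading a saved embedding checkpoint for inference.
# The keys "embedding_user"/"embedding_item" and the shapes are assumed,
# standing in for e.g. ./save_for_inference/gowalla/Embeddings_at_stage__t_.pth.tar
import torch

ckpt = {"embedding_user": torch.zeros(4, 64),   # 4 users, 64-dim embeddings
        "embedding_item": torch.zeros(6, 64)}   # 6 items, 64-dim embeddings
torch.save(ckpt, "Embeddings_at_stage_30.pth.tar")

state = torch.load("Embeddings_at_stage_30.pth.tar")
print(state["embedding_user"].shape)  # torch.Size([4, 64])
```

With the per-stage embeddings and weights loaded this way, the test at each stage can run directly, with no retraining.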

D. Reproducing from scratch

  1. To reproduce the result of CI-LightGCN on Yelp from scratch, use the command python CI-LightGCN_from_zero.py --dataset='finetune_yelp' --model CILightGCN --finetune_epochs 300 --conv2d_reg 1e-2 --decay 1e-4 --icl_k 61 --notactive 1 --A 0.5 --inference_k 11 --radio_loss 0.7 --icl_reg 1

  2. To reproduce the result of CI-LightGCN on Gowalla from scratch, use the command python CI-LightGCN_from_zero.py --dataset='gowalla' --model CILightGCN --finetune_epochs 200 --conv2d_reg 1e-3 --decay 1e-3 --icl_k 58 --notactive 1 --A 0.6 --inference_k 28 --radio_loss 0.02 --icl_reg 0.0005
