
Towards Efficient and Effective Unlearning of Large Language Models for Recommendation

Introduction

This is the PyTorch implementation of E2URec, proposed in the paper Towards Efficient and Effective Unlearning of Large Language Models for Recommendation (Frontiers of Computer Science, 2024).

Requirements

pip install -r requirements.txt

Data preprocessing

Scripts for data preprocessing are included in the data_preprocess folder.

First, use ml-1m.ipynb to preprocess MovieLens-1M.

Then, convert the data into text samples:

python data2json.py --K 10 --temp_type simple --set train --dataset ml-1m
python data2json.py --K 10 --temp_type simple --set valid --dataset ml-1m
python data2json.py --K 10 --temp_type simple --set test --dataset ml-1m
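
The exact prompt template is defined in data2json.py; as a rough illustration only (hypothetical field names and wording, not the repository's actual "simple" template), the conversion turns a user's last K interactions into a JSON sample roughly like this:

import json

# Hypothetical sketch of the history-to-text conversion; the real
# template lives in data2json.py and may differ.
def history_to_prompt(history, target, K=10):
    """Turn a user's last K rated movies into a text prompt."""
    recent = history[-K:]
    liked = [title for title, rating in recent if rating > 3]
    disliked = [title for title, rating in recent if rating <= 3]
    return (
        f"The user liked: {', '.join(liked)}. "
        f"The user disliked: {', '.join(disliked)}. "
        f"Will the user like the movie {target}? Answer Yes or No."
    )

sample = {
    "prompt": history_to_prompt([("Toy Story", 5), ("Heat", 2)], "Jumanji"),
    "label": "Yes",
}
print(json.dumps(sample, indent=2))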

Finally, use split_ml-1m.ipynb to split the data into train/valid/test sets and into retained/forgotten subsets.

How to run E2URec

Our method E2URec can be trained by running:

sh train_e2urec.sh
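
As a conceptual reference only, here is a minimal sketch assuming the paper's two-teacher design (not the repository's exact code): the student updates lightweight adapter parameters, and on forgotten data it is pulled towards a "forgetting" teacher that never saw that data, while on retained data it is kept close to the original "remembering" model:

import torch.nn.functional as F

def unlearning_loss(student_logits_f, forget_teacher_logits,
                    student_logits_r, remember_teacher_logits,
                    alpha=1.0):
    """Sketch of a two-teacher KL distillation objective (illustrative;
    see the paper and the training code for the actual formulation)."""
    # Forget: match a teacher that was never trained on the forgotten data.
    loss_forget = F.kl_div(
        F.log_softmax(student_logits_f, dim=-1),
        F.softmax(forget_teacher_logits, dim=-1),
        reduction="batchmean",
    )
    # Retain: stay close to the original model on the retained data.
    loss_retain = F.kl_div(
        F.log_softmax(student_logits_r, dim=-1),
        F.softmax(remember_teacher_logits, dim=-1),
        reduction="batchmean",
    )
    return loss_forget + alpha * loss_retain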

How to run baselines

We also provide shell scripts for the baseline methods.

To run the Retrain baseline (retraining from scratch on only the retained data):

sh train_normal.sh

To run the SISA baseline (Sharded, Isolated, Sliced, and Aggregated training):

sh train_sisa.sh

To run the NegGrad baseline (gradient ascent on the forgotten data):

sh train_ga.sh

To run the Bad-T (bad teacher) baseline:

sh train_rl.sh
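
For reference, NegGrad simply performs gradient ascent on the forgotten data (hence train_ga.sh). A minimal sketch of one such update, assuming a Hugging Face-style model whose output exposes a .loss attribute:

def neggrad_step(model, optimizer, batch):
    """One gradient-ascent step on a batch of forgotten data (sketch)."""
    optimizer.zero_grad()
    loss = model(**batch).loss  # assumed HF-style output with a .loss field
    (-loss).backward()          # negate the loss: descent becomes ascent
    optimizer.step()
    return loss.item()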
