# RDRec (ACL 2024 Main, short paper)

Code for the paper "RDRec: Rationale Distillation for LLM-based Recommendation".
First, prepare Llama 2. Request a license from [the download site](https://llama.meta.com/llama-downloads/), then download the weights and install Meta's `llama` package:

```
>> cd llama
>> ./download.sh   # license required
>> pip install -e .
```
Verify the setup with the example script:

```
>> torchrun --nproc_per_node 1 example_chat_completion.py \
    --ckpt_dir llama-2-7b-chat/ \
    --tokenizer_path tokenizer.model \
    --max_seq_len 512 --max_batch_size 6
```
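You can also sanity-check the checkpoint directly from Python. Below is a minimal sketch using the `llama` package installed above; the prompt text is an arbitrary smoke test, and the script must also be launched with `torchrun`:

```python
# Minimal checkpoint sanity check; run with: torchrun --nproc_per_node 1 check.py
from llama import Llama

generator = Llama.build(
    ckpt_dir="llama-2-7b-chat/",
    tokenizer_path="tokenizer.model",
    max_seq_len=512,
    max_batch_size=6,
)

# One single-turn dialog; the question itself is just an arbitrary smoke test.
dialogs = [[{"role": "user", "content": "What does a recommender system do?"}]]
results = generator.chat_completion(dialogs, max_gen_len=128, temperature=0.6, top_p=0.9)
print(results[0]["generation"]["content"])
```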
Next, distill rationales for each dataset with the chat model (replace `{dataset}` with the name of a dataset under `data/`):

```
>> torchrun --nproc_per_node 1 data/{dataset}/distillation_{dataset}.py \
    --ckpt_dir llama/llama-2-7b-chat/ \
    --tokenizer_path llama/tokenizer.model \
    --max_seq_len 512 --max_batch_size 6
```
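Conceptually, each distillation script prompts Llama 2 to turn a review into a rationale about the user's preference and the item's attributes. The sketch below illustrates that loop; the file names, record fields, and prompt wording are assumptions for illustration, and the actual scripts under `data/{dataset}/` are authoritative:

```python
# Illustrative rationale-distillation loop (assumed file names and prompt wording;
# see data/{dataset}/distillation_{dataset}.py for the real implementation).
import json
from llama import Llama

PROMPT = ('A user wrote this review: "{review}"\n'
          "Briefly describe the user's preference and the item's attributes.")

generator = Llama.build(
    ckpt_dir="llama/llama-2-7b-chat/",
    tokenizer_path="llama/tokenizer.model",
    max_seq_len=512,
    max_batch_size=6,
)

# Hypothetical input: one JSON record per line with "user", "item", "text" fields.
reviews = [json.loads(line) for line in open("reviews.jsonl")]
with open("rationales.jsonl", "w") as out:
    for i in range(0, len(reviews), 6):          # chunks of max_batch_size
        batch = reviews[i:i + 6]
        dialogs = [[{"role": "user", "content": PROMPT.format(review=r["text"])}]
                   for r in batch]
        results = generator.chat_completion(dialogs, max_gen_len=128,
                                            temperature=0.6, top_p=0.9)
        for r, res in zip(batch, results):
            out.write(json.dumps({"user": r["user"], "item": r["item"],
                                  "rationale": res["generation"]["content"]}) + "\n")
```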
Install the remaining dependencies:

```
>> pip install -r requirement.txt
```
Pre-train the model, then run the three recommendation tasks (sequential, top-N, and explanation generation):

```
>> python pretrain.py ./data/{dataset}/ --cuda --batch_size 64 --checkpoint ./checkpoint/{dataset}/
>> python seq.py ./data/{dataset}/ --cuda --batch_size 32 --checkpoint ./checkpoint/{dataset}/
>> python topn.py ./data/{dataset}/ --cuda --batch_size 32 --checkpoint ./checkpoint/{dataset}/
>> python exp.py ./data/{dataset}/ --cuda --batch_size 32 --checkpoint ./checkpoint/{dataset}/
```
- RDRec's results for sequential recommendation fluctuate across runs. The paper reports averages over 10-trial runs (see t_test.py for details; an illustrative sketch of such a test follows this list). If the results are not ideal, please pre-train the model once again.
- If you have any questions, please feel free to contact me at kaysenn@163.com.
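For reference, averaging and significance testing over 10 trials might look like the sketch below. This is an illustrative stand-in, not the repo's t_test.py; the input files and the choice of a two-sample t-test are assumptions:

```python
# Illustrative 10-trial averaging and t-test (not the repo's t_test.py).
# Assumed inputs: plain-text files with one metric value (e.g., NDCG@10) per trial.
import numpy as np
from scipy import stats

rdrec = np.loadtxt("rdrec_trials.txt")        # hypothetical: 10 values, one per run
baseline = np.loadtxt("baseline_trials.txt")  # hypothetical: 10 values, one per run

print(f"RDRec mean over trials: {rdrec.mean():.4f} (std {rdrec.std(ddof=1):.4f})")
t_stat, p_value = stats.ttest_ind(rdrec, baseline)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```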
If this repository helps you, please cite:
```
@inproceedings{wang2024rdrec,
  title={RDRec: Rationale Distillation for LLM-based Recommendation},
  author={Wang, Xinfeng and Cui, Jin and Suzuki, Yoshimi and Fukumoto, Fumiyo},
  booktitle={Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)},
  year={2024}
}
```