COLING 2022 Oral Paper. If you find this work useful, please cite:
@inproceedings{liu2022prec,
title = "Boosting Deep {CTR} Prediction with a Plug-and-Play Pre-trainer for News Recommendation",
author = "Liu, Qijiong and
Zhu, Jieming and
Dai, Quanyu and
Wu, Xiaoming",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2022.coling-1.249",
pages = "2823--2833"
}
We use the open-source tool UniTok for tokenization.
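As a rough, hypothetical sketch of what the tokenization step does (this is not UniTok's actual API; function names here are illustrative only), mapping raw news titles to fixed-length id sequences looks like:

```python
from collections import Counter

def build_vocab(titles, min_freq=1):
    """Build a word-to-id vocabulary from raw titles (0 reserved for padding, 1 for unknown)."""
    counts = Counter(word for title in titles for word in title.lower().split())
    vocab = {"[PAD]": 0, "[UNK]": 1}
    for word, freq in counts.most_common():
        if freq >= min_freq:
            vocab[word] = len(vocab)
    return vocab

def tokenize(title, vocab, max_len=20):
    """Convert one title into a fixed-length id sequence, truncating or padding as needed."""
    ids = [vocab.get(word, vocab["[UNK]"]) for word in title.lower().split()]
    ids = ids[:max_len]
    return ids + [vocab["[PAD]"]] * (max_len - len(ids))

titles = ["Stocks rally as markets rebound", "Stocks fall on rate fears"]
vocab = build_vocab(titles)
print(tokenize(titles[0], vocab, max_len=8))
```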
# Step 1: tokenize the raw data and build the dataset
python build_dataset.py
# Step 2: pre-train the news encoder (MIND-small, 3 layers, 12 heads, 768 dims)
python worker.py --config config/MINDsmall-3L12H768D.yaml --exp exp/mind-news.yaml
# Step 3: export the pre-trained news representations
python worker.py --config config/MINDsmall-3L12H768D.yaml --exp exp/export-news.yaml
# Step 4: pre-train the user encoder (MIND-small, 6 layers, 12 heads, 768 dims)
python worker.py --config config/MINDsmall-user-6L12H768D.yaml --exp exp/mind-user.yaml
# Step 5: export the pre-trained user representations
python worker.py --config config/MINDsmall-user-6L12H768D.yaml --exp exp/export-user.yaml
We use the open-source benchmarking tool FuxiCTR for CTR prediction.
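For orientation, a hypothetical sketch of a FuxiCTR dataset config fragment is shown below. The dataset id, file paths, and column names are placeholders, and the exact schema varies across FuxiCTR versions, so consult the FuxiCTR repository's example configs for the authoritative format.

```yaml
# hypothetical dataset_config entry; field layout follows FuxiCTR's public examples
mind_ctr:
    data_root: ./data/
    data_format: csv
    train_data: ./data/mind/train.csv
    valid_data: ./data/mind/valid.csv
    test_data: ./data/mind/test.csv
    feature_cols:
        - {name: user_id, active: True, dtype: str, type: categorical}
        - {name: news_id, active: True, dtype: str, type: categorical}
    label_col: {name: click, dtype: float}
```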