This repository contains the code used for the experiments in "It Is Different When Items Are Older: Debiasing Recommendations When Selection Bias and User Preferences Are Dynamic".
If you use this code to produce results for your scientific publication, or if you share a copy or fork, please refer to our WSDM 2022 paper:
@inproceedings{huang-2022-different,
author = {Huang, Jin and Oosterhuis, Harrie and de Rijke, Maarten},
booktitle = {WSDM 2022: The Fifteenth International Conference on Web Search and Data Mining},
month = {February},
publisher = {ACM},
title = {It Is Different When Items Are Older: Debiasing Recommendations When Selection Bias and User Preferences are Dynamic},
year = {2022}}
Install Conda, then create and activate a Python 3.6 environment:
$ conda create -n Dancer python=3.6
$ conda activate Dancer
Then install the required packages:
$ pip install -r requirements.txt
We compare time-aware and time-unaware methods to answer three research questions.
The results of MF, TMF, TTF++, and TMTF on observation prediction (the OIPT task) can be reproduced with the following commands, which include the best hyperparameters for each method. In the random setting:
$ cd examples
$ python run_exp.py --task OIPT --mode MF --setting random --lr 1e-5 --reg 1e-6
$ python run_exp.py --task OIPT --mode TMF_v --setting random --lr 1e-5 --reg 1e-7
$ python run_exp.py --task OIPT --mode TF --setting random --lr 1e-5 --reg 1e-7
$ python run_exp.py --task OIPT --mode TMTF --setting random --lr 1e-4 --reg 1e-6
In the time setting:
$ cd examples
$ python run_exp.py --task OIPT --mode MF --setting time --lr 1e-4 --reg 1e-7
$ python run_exp.py --task OIPT --mode TMF_v --setting time --lr 1e-4 --reg 1e-7
$ python run_exp.py --task OIPT --mode TF --setting time --lr 1e-4 --reg 0
$ python run_exp.py --task OIPT --mode TMTF --setting time --lr 1e-4 --reg 1e-7
The results of MF, TMF, TTF++, and TMTF on rating prediction (the OPPT task) can be reproduced with the following commands, which include the best hyperparameters for each method. In the naive setting:
$ cd examples
$ python run_exp.py --task OPPT --mode MF_v --setting naive --lr 1e-4 --reg 1e-4
$ python run_exp.py --task OPPT --mode TMF_v --setting naive --lr 1e-4 --reg 1e-4
$ python run_exp.py --task OPPT --mode TF --setting naive --lr 1e-4 --reg 1e-5
$ python run_exp.py --task OPPT --mode TMTF --setting naive --lr 1e-4 --reg 1e-4
In the IPS setting:
$ cd examples
$ python run_exp.py --task OPPT --mode MF_v --setting ips --lr 1e-2 --reg 1e-4
$ python run_exp.py --task OPPT --mode TMF_v --setting ips --lr 1e-2 --reg 1e-3
$ python run_exp.py --task OPPT --mode TF --setting ips --lr 1e-4 --reg 1e-6
$ python run_exp.py --task OPPT --mode TMTF --setting ips --lr 1e-4 --reg 1e-6
The results of MF, TMF, MF-StaticIPS, TMF-StaticIPS, MF-DANCER, and TMF-DANCER on rating prediction over time (the TART task) can be reproduced with the following commands, which include the best hyperparameters for each method:
$ cd examples
$ python run_exp.py --task TART --mode MF_v --setting naive --lr 1e-2 --reg 1e-4
$ python run_exp.py --task TART --mode TMF_v --setting naive --lr 1e-2 --reg 1e-4
$ python run_exp.py --task TART --mode MF_v --setting StaticIps --lr 1e-2 --reg 1.0
$ python run_exp.py --task TART --mode TMF_v --setting StaticIps --lr 1e-3 --reg 1e-1
$ python run_exp.py --task TART --mode MF_v --setting DANCER --lr 1e-4 --reg 1e-3
$ python run_exp.py --task TART --mode TMF_v --setting DANCER --lr 1e-4 --reg 1e-3
Checkpoints are stored in the folder ./checkpoint_dir/, which must be created before running the commands. The results, with the corresponding metrics, are printed in the last few lines of the output.
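The checkpoint directory can be created beforehand, for example (assuming the path is relative to the directory from which run_exp.py is invoked):

```shell
# Create the checkpoint directory if it does not already exist;
# the path is assumed to be relative to the working directory.
mkdir -p ./checkpoint_dir
```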
Moreover, the results reported in the paper are averages over 10 independent runs, which can be reproduced with random seeds 2012 through 2021, set via the --seed flag, e.g., --seed 2012.
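As a sketch, the 10 runs for one configuration could be scripted as follows (the loop prints the commands for inspection; remove the echo to execute them — the task, mode, and hyperparameters are taken from the TMF-DANCER command above):

```shell
# Print the 10 run commands, one per seed used in the paper (2012-2021).
for seed in $(seq 2012 2021); do
  echo python run_exp.py --task TART --mode TMF_v --setting DANCER \
    --lr 1e-4 --reg 1e-3 --seed "$seed"
done
```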