Auditing Consumer- and Producer-Fairness in Graph Collaborative Filtering

This is the official GitHub repository for the paper Auditing Consumer- and Producer-Fairness in Graph Collaborative Filtering, accepted as a full paper at ECIR 2023.

This repository depends heavily on the Elliot framework, so we suggest you refer to its official GitHub page and documentation.

All graph models are implemented in PyTorch Geometric, using PyTorch 1.10.2 with CUDA 10.2 and cuDNN 8.0.

Installation guidelines: scenario #1

If you can install CUDA (i.e., version 10.2) on your workstation, you may create the virtual environment with the requirements file included in the repository, as follows:

# PYTORCH ENVIRONMENT (CUDA 10.2, cuDNN 8.0)

$ python3 -m venv venv
$ source venv/bin/activate
$ pip install --upgrade pip
$ pip install -r requirements.txt
$ pip install torch-scatter torch-sparse torch-cluster torch-spline-conv torch-geometric -f https://data.pyg.org/whl/torch-1.10.0+cu102.html
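
Once the environment is ready, a quick sanity check can verify that PyTorch and PyTorch Geometric are installed and that the GPU is visible. This is a minimal sketch, not part of the repository:

# Sanity check (not part of the repository): verify versions and GPU visibility
import torch
import torch_geometric

print(torch.__version__)             # expected: 1.10.x
print(torch_geometric.__version__)
print(torch.cuda.is_available())     # expected: True on a CUDA-enabled machine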

Installation guidelines: scenario #2

A more convenient way of running the experiments is to instantiate a Docker container with CUDA 10.2 already installed.

Make sure you have Docker and the NVIDIA Container Toolkit installed on your machine (you may refer to this guide).

Then, you may use the following Docker image to instantiate the container equipped with CUDA 10.2:

Container Docker with CUDA 10.2 and cuDNN 8.0 (the environment for PyTorch): link

After setting up your Docker container, you may follow the same guidelines as in scenario #1.

Datasets

In ./data/, you may find all the tsv files for the datasets, i.e., the training, validation, and test sets.
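
As a hypothetical illustration of how a split may be inspected, assuming the tsv files follow Elliot's tab-separated layout (the path placeholder and the exact column layout are assumptions; check the files in ./data/ first):

# Hypothetical sketch: inspect a training split with pandas.
# The path placeholder and column layout are assumptions, not part of the repository.
import pandas as pd

train = pd.read_csv('./data/<dataset_name>/train.tsv', sep='\t', header=None)
print(train.shape)   # (number of interactions, number of columns)
print(train.head())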

Training and testing models

To train and evaluate the models on all considered metrics, you may run the following command:

$ python -u start_experiments.py --config <dataset_model>

where <dataset_model> refers to the name of the dataset and model to consider in the current experiment.

You may find all configuration files at ./config_files/<dataset_model>.yml, where all hyperparameter search spaces and exploration strategies are reported.

Results for the computed metrics are available in the folder ./results/<dataset_name>/performance/. Specifically, you need to access the tsv file whose name follows the pattern rec_cutoff_<cutoff>_relthreshold_0_<datetime-experiment-end>.tsv.

Pareto calculation

If you want to calculate, for each metric pair (e.g., nDCG vs. APLT), the configuration points that belong to the Pareto frontier, and thus reproduce the results illustrated in the paper, you need to use the script pareto.py. Open the file and modify the following lines as needed:

  • line 59: modify the path where all configurations for a specific model are reported, along with their metric results (Elliot generates this file when the whole experimental flow is over; you may find it at ./results/<dataset>/performance/)
  • lines 63-65: decide what to comment/uncomment depending on the multi-objective trade-off you are considering

Once the script has finished running, you will end up with a csv file reporting, for each nondominated point in the objective space, its coordinates.
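
For reference, the following is a minimal, self-contained sketch of how nondominated points can be extracted for a pair of metrics that are both maximized (e.g., nDCG and APLT). It is not the repository's pareto.py; the file path and metric column names are hypothetical placeholders:

# Minimal Pareto-frontier sketch (not the repository's pareto.py).
# A point is dominated if another point is at least as good on both
# objectives and strictly better on at least one.
import pandas as pd

def pareto_frontier(points):
    frontier = []
    for i, p in enumerate(points):
        dominated = any(
            q[0] >= p[0] and q[1] >= p[1] and q != p
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            frontier.append(p)
    return frontier

# Hypothetical usage on an Elliot performance file with metric columns 'nDCG' and 'APLT'
df = pd.read_csv('./results/<dataset>/performance/<results_file>.tsv', sep='\t')
points = list(zip(df['nDCG'], df['APLT']))
pd.DataFrame(pareto_frontier(points), columns=['nDCG', 'APLT']).to_csv('pareto_points.csv', index=False)

The quadratic scan above is deliberately simple; for the small number of configurations explored per model, it is fast enough that a sort-based O(n log n) variant is unnecessary.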
