
Evaluation Metrics for Recommendation Systems


This repository contains implementations of evaluation metrics for recommendation systems.
We compare the performance of similarity, candidate generation, rating, and ranking metrics on five datasets:
MovieLens 100k, MovieLens 1M, MovieLens 10M, the Amazon Electronics dataset, and the Amazon Movies and TV dataset.
A summary of the experiments, along with instructions for replicating them, can be found below.

About Recommendations Models

Most of the code in this repository is adapted from https://github.com/recommenders-team/recommenders

Experiments Summary and Our Paper

Cite Our Paper

@misc{jadon2023comprehensive,
      title={A Comprehensive Survey of Evaluation Techniques for Recommendation Systems}, 
      author={Aryan Jadon and Avinash Patil},
      year={2023},
      eprint={2312.16015},
      archivePrefix={arXiv},
      primaryClass={cs.IR}
}

Summary of Experiments

Similarity Metrics

similarity_metrics.png
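As a rough illustration of the metrics in this category (a minimal sketch, not the repository's own implementation, which lives in the recommenders folder), here are two common similarity measures between users or items: cosine similarity on rating vectors and Jaccard similarity on sets of interacted items.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two rating vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def jaccard_similarity(items_a, items_b):
    """Jaccard similarity between two sets of interacted items."""
    return len(items_a & items_b) / len(items_a | items_b)

# Two users' ratings over the same four items (0 = unrated).
u = np.array([4.0, 0.0, 5.0, 1.0])
v = np.array([4.0, 0.0, 4.0, 0.0])
print(cosine_similarity(u, v))

# Overlap of the item sets the two users interacted with.
print(jaccard_similarity({1, 2, 3}, {2, 3, 4}))  # 2 shared / 4 total = 0.5
```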

Candidate Generation Metrics

candidate_generation_metrics.png
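One metric often used to evaluate candidate generation is catalog coverage: the fraction of the item catalog that appears in at least one user's recommendation list. The sketch below is an assumed, simplified version for illustration only; the repository's scripts compute the full set of candidate generation metrics.

```python
def catalog_coverage(recommended_lists, catalog):
    """Fraction of catalog items recommended to at least one user.

    recommended_lists: iterable of per-user recommendation lists.
    catalog: collection of all item ids in the catalog.
    """
    recommended = set()
    for items in recommended_lists:
        recommended.update(items)
    return len(recommended & set(catalog)) / len(set(catalog))

# Two users' recommendation lists over a five-item catalog.
recs = [[1, 2], [2, 3]]
print(catalog_coverage(recs, [1, 2, 3, 4, 5]))  # 3 of 5 items covered = 0.6
```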

Rating Metrics

rating_metrics.png
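Rating metrics measure how close predicted ratings are to observed ratings. A minimal sketch of the two most common ones, RMSE and MAE (again illustrative only, not the repository's implementation):

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error between true and predicted ratings."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    """Mean absolute error between true and predicted ratings."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

actual = [4.0, 3.0, 5.0]
predicted = [3.5, 3.0, 4.0]
print(rmse(actual, predicted))
print(mae(actual, predicted))  # (0.5 + 0.0 + 1.0) / 3 = 0.5
```

RMSE penalizes large errors more heavily than MAE, which is why the two can rank models differently.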

Ranking Metrics

ranking_metrics.png
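Ranking metrics evaluate the ordered top-k recommendation list against the set of items a user actually found relevant. A hedged sketch of precision@k and NDCG@k with binary relevance (the repository's experiments use the recommenders library implementations):

```python
import math

def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items that are relevant."""
    return sum(1 for item in recommended[:k] if item in relevant) / k

def ndcg_at_k(recommended, relevant, k):
    """Normalized discounted cumulative gain with binary relevance."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(recommended[:k]) if item in relevant)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal > 0 else 0.0

recommended = [10, 20, 30, 40]
relevant = {10, 30}
print(precision_at_k(recommended, relevant, 2))  # 1 hit in top-2 = 0.5
print(ndcg_at_k(recommended, relevant, 4))
```

Unlike precision, NDCG rewards placing relevant items earlier in the list, which matters when users only look at the first few results.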

Replicating this Repository and Experiments

  • recommenders: Folder containing the recommendation algorithm implementations.
  • similarity_metrics: Folder containing scripts for the similarity metrics experiments.
  • candidate_generation_metrics: Folder containing scripts for the candidate generation metrics experiments.
  • rating_metrics: Folder containing scripts for the rating metrics experiments.
  • ranking_metrics: Folder containing scripts for the ranking metrics experiments.

Creating Environment

Install the dependencies from requirements.txt:

pip install -r requirements.txt

or

conda env create -f environment.yml

Similarity Metrics Experiments

Run the Similarity Metrics experiments using:

chmod +x run_similarity_metrics_experiments.sh
./run_similarity_metrics_experiments.sh

Candidate Generation Metrics Experiments

Run the Candidate Generation Metrics experiments using:

chmod +x run_candidate_generation_metrics_experiments.sh
./run_candidate_generation_metrics_experiments.sh

Rating Metrics Experiments

Run the Rating Metrics experiments using:

chmod +x run_rating_metrics_experiments.sh
./run_rating_metrics_experiments.sh

Ranking Metrics Experiments

Run the Ranking Metrics experiments using:

chmod +x run_ranking_metrics_experiments.sh
./run_ranking_metrics_experiments.sh
