README.md

File metadata and controls

58 lines (43 loc) · 5.46 KB

Model

Acknowledgement

  • We use the implicit library for Weighted Matrix Factorization (WMF)
  • UserKNN, ItemKNN, and RP3beta, along with all their associated code, come from the RecSys 2019 Deep Learning Evaluation GitHub repository
  • Mult-VAE is written in TensorFlow, following the notebook provided by the paper's authors

Thank you!! :)
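For reference, WMF for implicit feedback (the model behind the implicit library's ALS solver) can be sketched in a few lines of NumPy. This is a minimal illustration of the alternating-least-squares updates with confidence weights, not the repo's actual training code; the toy matrix and all hyperparameter values are made up.

```python
import numpy as np

# Hypothetical toy interaction matrix (users x items); nonzero = implicit feedback counts.
R = np.array([[3., 0., 1.],
              [0., 2., 0.],
              [1., 0., 4.]])

def wmf_als(R, k=2, alpha=10.0, reg=0.1, iters=20, seed=0):
    """Weighted matrix factorization via alternating least squares
    (Hu, Koren & Volinsky 2008): minimize sum_ui c_ui (p_ui - x_u.y_i)^2."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = (R > 0).astype(float)      # binary preference matrix
    C = 1.0 + alpha * R            # confidence weights
    X = 0.1 * rng.standard_normal((n_users, k))
    Y = 0.1 * rng.standard_normal((n_items, k))
    for _ in range(iters):
        for u in range(n_users):   # each user row is a small ridge regression
            Cu = np.diag(C[u])
            A = Y.T @ Cu @ Y + reg * np.eye(k)
            X[u] = np.linalg.solve(A, Y.T @ Cu @ P[u])
        for i in range(n_items):   # symmetric update for items
            Ci = np.diag(C[:, i])
            A = X.T @ Ci @ X + reg * np.eye(k)
            Y[i] = np.linalg.solve(A, X.T @ Ci @ P[:, i])
    return X, Y

X, Y = wmf_als(R)
scores = X @ Y.T                   # predicted preference scores
```

In practice the implicit library does this with sparse matrices and parallel solvers; the dense loops above are only meant to show the structure of the updates.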

Running the Models

  1. Option 1 (run with specific hyperparameter settings and/or your own grid search)

  2. Option 2 (Bayesian hyperparameter optimization, using scikit-optimize)
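For Option 1, a hand-rolled grid search is just an exhaustive loop over the Cartesian product of the hyperparameter grid. The sketch below is illustrative only: the grid values and the `evaluate` function are placeholders, not this repo's API.

```python
from itertools import product

# Hypothetical search space (illustrative values, not the repo's defaults).
grid = {
    "factors": [32, 64],
    "reg": [0.01, 0.1],
}

def evaluate(factors, reg):
    # Placeholder objective: in practice, train the model with these
    # hyperparameters and return validation Recall@10. Faked here so the
    # sketch is self-contained.
    return 1.0 / (1.0 + reg) + factors / 1000.0

# Try every combination and keep the best-scoring configuration.
best = max(
    (dict(zip(grid, combo)) for combo in product(*grid.values())),
    key=lambda params: evaluate(**params),
)
```

Option 2 replaces this exhaustive loop with scikit-optimize's sequential model-based search, which proposes each new configuration based on the scores of the previous ones.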

Experimental Results

  • The results are stored in the ./logs/ folder
    • Results for a particular dataset and model can be found in ./logs/{dataset}/{model}/
  • There are helper scripts to 'sort' and 'gather' those results, i.e. sort_results.py, gather_results.py, and utilities_results.py
  • For example, if we consider the Recall @ 10 metric, there are 3 files in the ./logs/ folder:
    • ___results_summary___Rec_10.txt shows the model performance
    • ___results_summary___Rec_10__bar.png contains the bar plot
    • ___results_summary___Rec_10__table.png shows the relative performance of Model X (row) over Model Y (column)
      • E.g. for the Amazon (Electronics) dataset in Cluster 1, the value at (Row 1, Column 2) indicates the relative improvement of RP3beta over ItemKNN
      • Values in light green (also indicated with a *) are statistically significant with a p-value < 0.05
      • Values in dark green (also indicated with a **) are statistically significant with a p-value < 0.01
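The two quantities in those files are straightforward to state precisely. Recall@k is the fraction of a user's held-out items that appear in the top-k recommendations, and the table entries are percentage improvements of one model over another. A small self-contained sketch (function names are ours, not the repo's):

```python
def recall_at_k(ranked_items, relevant_items, k=10):
    """Fraction of the user's held-out items recovered in the top-k."""
    if not relevant_items:
        return 0.0
    hits = len(set(ranked_items[:k]) & set(relevant_items))
    return hits / len(relevant_items)

def relative_improvement(score_x, score_y):
    """Relative improvement (%) of model X over model Y, as in the tables."""
    return (score_x - score_y) / score_y * 100.0
```

Per-user Recall@10 values are averaged over all test users, and the paired per-user scores of two models are what the significance tests (p < 0.05 and p < 0.01) are computed on.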

Experimental Results (Recall @ 10)

(figure: Recall @ 10 results)

Experimental Results (nDCG @ 10)

(figure: nDCG @ 10 results)
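Unlike Recall@10, nDCG@10 is rank-sensitive: a hit near the top of the list counts more than one near position 10. With binary relevance it reduces to a short formula; a minimal sketch (our own helper, not the repo's code):

```python
import math

def ndcg_at_k(ranked_items, relevant_items, k=10):
    """Binary-relevance nDCG@k: DCG of the ranking divided by the DCG of
    an ideal ranking that places all held-out items first."""
    rel = set(relevant_items)
    dcg = sum(1.0 / math.log2(i + 2)               # position i gains 1/log2(i+2)
              for i, item in enumerate(ranked_items[:k]) if item in rel)
    ideal = sum(1.0 / math.log2(i + 2)
                for i in range(min(len(rel), k)))
    return dcg / ideal if ideal > 0 else 0.0
```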