
Active Embedding Search via Noisy Paired Comparisons

Project code for "Active Embedding Search via Noisy Paired Comparisons" (ICML 2019) by Gregory H. Canal, Andrew K. Massimino, Mark A. Davenport, Christopher J. Rozell.

Paper abstract

Suppose that we wish to estimate a user's preference vector w from paired comparisons of the form "does user w prefer item p or item q?," where both the user and items are embedded in a low-dimensional Euclidean space with distances that reflect user and item similarities. Such observations arise in numerous settings, including psychometrics and psychology experiments, search tasks, advertising, and recommender systems. In such tasks, queries can be extremely costly and subject to varying levels of response noise; thus, we aim to actively choose pairs that are most informative given the results of previous comparisons. We provide new theoretical insights into the benefits and challenges of greedy information maximization in this setting, and develop two novel strategies that maximize lower bounds on information gain and are simpler to analyze and compute respectively. We use simulated responses from a real-world dataset to validate our strategies through their similar performance to greedy information maximization, and their superior preference estimation over state-of-the-art selection methods as well as random queries.
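As a purely illustrative sketch (not code from this repository), the snippet below shows one common logistic response model for such noisy paired comparisons: the user at w tends to prefer the nearer item, with a choice probability governed by the squared-distance difference and a noise constant k. The function name, the value of k, and the example points are assumptions made for illustration only.

```python
import numpy as np

def simulate_response(w, p, q, k=1.0, rng=None):
    """Simulate a noisy paired comparison: does the user at w prefer item p or q?

    Returns 1 if p is (noisily) preferred, else 0. The logistic noise constant
    k is a placeholder, not a value taken from the paper or this repository.
    """
    rng = np.random.default_rng() if rng is None else rng
    # A user closer to p than to q tends to prefer p; the squared-distance
    # difference drives a logistic choice probability.
    margin = np.sum((w - q) ** 2) - np.sum((w - p) ** 2)
    prob_prefer_p = 1.0 / (1.0 + np.exp(-k * margin))
    return int(rng.random() < prob_prefer_p)

# Toy example: a 2-D user point and two candidate items.
rng = np.random.default_rng(0)
w = np.array([0.2, -0.1])
p = np.array([0.3, 0.0])
q = np.array([-0.5, 0.8])
print(simulate_response(w, p, q, k=2.0, rng=rng))
```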

Requirements

Packages (Python)

Data

run_experiments.py uses our pre-processed embedding files contained in the sub-directory make-embedding/data/. It is not necessary to download the original dataset in this case.

To build an embedding using the scripts in make-embedding/, or to pre-process an embedding using process_embedding.py, it is necessary to have the Food-10k dataset of triplets. This dataset may be obtained from the SE(3) Computer Vision Group at Cornell Tech.

Packages (MATLAB, only for the Enusvm/GaussCloud baseline method)

Code

  • active_search.py: module for the proposed InfoGain, EPMV, and MCMV search methods (a rough sketch of the greedy information-gain idea follows this list).
  • actrankq.py: module for ActRankQ, our implementation of an active ranking baseline.
  • run_experiments.py: script to run the paper's experiments.
  • process_embedding.py: script to estimate the noise constant and embedding scaling from an embedding file.
  • make-embedding/*.py: scripts to generate an embedding from triplets sourced from human intelligence tasks.
  • enusvm/*.m: implementation and simulation for the "GaussCloud" baseline method.
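
The following is a rough, self-contained sketch of the greedy information-gain idea behind these search modules, included only to illustrate the concept. It does not use the repository's actual interfaces in active_search.py; the function names, the logistic response model, and the noise constant k are assumptions for illustration.

```python
import numpy as np
from itertools import combinations

def response_prob(w, p, q, k=1.0):
    """Probability that a user at w prefers item p over item q (assumed logistic model)."""
    margin = np.sum((w - q) ** 2, axis=-1) - np.sum((w - p) ** 2, axis=-1)
    return 1.0 / (1.0 + np.exp(-k * margin))

def binary_entropy(p):
    """Entropy (in bits) of a Bernoulli response with success probability p."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def greedy_infogain_pair(posterior_samples, items, k=1.0):
    """Pick the item pair whose comparison maximizes the estimated mutual
    information with the user point, using posterior samples of w."""
    best_pair, best_gain = None, -np.inf
    for i, j in combinations(range(len(items)), 2):
        probs = response_prob(posterior_samples, items[i], items[j], k)
        # Mutual information = entropy of the marginal response minus the
        # expected entropy of the response conditioned on each sample of w.
        gain = binary_entropy(probs.mean()) - binary_entropy(probs).mean()
        if gain > best_gain:
            best_pair, best_gain = (i, j), gain
    return best_pair, best_gain

# Toy example: 200 posterior samples of w in 2-D and 5 candidate items.
rng = np.random.default_rng(1)
samples = rng.normal(size=(200, 2))
items = rng.normal(size=(5, 2))
print(greedy_infogain_pair(samples, items, k=2.0))
```

In this sketch the posterior over w is represented by samples (e.g., from MCMC), and each candidate pair is scored by the estimated mutual information between its response and w; the pair with the highest score is queried next.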

Please send correspondence to Greg Canal (gregory.canal@gatech.edu) and Andrew Massimino (massimino@gatech.edu).
