# Mixed Methods for Evaluating User Satisfaction

## Abstract

Evaluation is a fundamental aspect of designing and experimenting with recommender systems. Evaluation typically takes one of three forms: (1) smaller lab studies with real users; (2) batch tests with offline collections, judgments, and measures; (3) large-scale controlled experiments (e.g., A/B tests) that rely on implicit feedback. But it is rare for the first to inform and influence the latter two; in particular, implicit feedback metrics often have to be continuously revised and updated as their underlying assumptions are found to be poorly supported.
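To make form (2) concrete, here is a minimal sketch of one common offline measure, NDCG@k, computed from graded relevance judgments. The judgment grades and cutoff are illustrative assumptions, not material from the tutorial.

```python
# Sketch of batch/offline evaluation: scoring a ranked list against
# graded relevance judgments with NDCG@k.
import math

def dcg_at_k(gains, k):
    """Discounted cumulative gain over the top-k gains of a ranking."""
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains[:k]))

def ndcg_at_k(ranked_gains, k):
    """NDCG@k: DCG of the system's ranking divided by the ideal DCG."""
    ideal_dcg = dcg_at_k(sorted(ranked_gains, reverse=True), k)
    return dcg_at_k(ranked_gains, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Hypothetical graded judgments (0-3) for recommended items, in rank order.
print(ndcg_at_k([3, 1, 0, 2, 0], k=5))
```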

Mixed methods research provides an opportunity to develop robust implicit metrics by combining the strengths of qualitative and quantitative approaches and exploring a research area from multiple perspectives. In this tutorial, we will show how qualitative research on user behavior provides insight into the relationship between implicit signals and satisfaction. These insights can inform and augment quantitative modeling and analysis for online and offline metrics and evaluation.
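As a rough illustration of how qualitative insights might feed quantitative modeling, the sketch below fits a logistic regression relating implicit signals to explicit satisfaction labels (e.g., gathered in a lab study). The signal names, data, and model choice are assumptions for illustration, not the tutorial's method.

```python
# Sketch: which implicit signals track self-reported satisfaction?
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-session implicit signals: [skip_rate, dwell_seconds, saves]
X = np.array([
    [0.8,  30, 0],
    [0.1, 240, 2],
    [0.5,  90, 1],
    [0.9,  15, 0],
    [0.2, 180, 3],
    [0.3, 150, 1],
])
# Hypothetical satisfaction labels from the same sessions (1 = satisfied).
y = np.array([0, 1, 1, 0, 1, 1])

model = LogisticRegression().fit(X, y)
# Coefficients suggest which signals align with satisfaction; qualitative
# findings (e.g., *why* users skip) guide which signals to include at all.
print(dict(zip(["skip_rate", "dwell_seconds", "saves"], model.coef_[0])))
```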

## ACM Reference Format

Jean Garcia-Gathright, Christine Hosey, Brian St. Thomas, Ben Carterette, and Fernando Diaz. 2018. Mixed Methods for Evaluating User Satisfaction: A 3/4-day Tutorial. In *Twelfth ACM Conference on Recommender Systems (RecSys ’18), October 2–7, 2018, Vancouver, BC, Canada*. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3240323.3241622

## Slides

- Part 1: Overview of mixed methods, qualitative and quantitative data collection
- Part 2: Qualitative and quantitative data analysis, best practices for mixed methods teams
- Part 3: Hypothesis testing (see the sketch after this list)
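As a taste of the hypothesis-testing material in Part 3, here is a minimal sketch comparing a per-user, satisfaction-aligned metric between two A/B variants with Welch's t-test. The metric values, effect size, and significance level are illustrative assumptions.

```python
# Sketch: does an online experiment show a difference in a satisfaction metric?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=0.50, scale=0.1, size=1000)    # per-user metric, variant A
treatment = rng.normal(loc=0.52, scale=0.1, size=1000)  # per-user metric, variant B

# Welch's t-test: no equal-variance assumption between the two groups.
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null: the variants' mean metric values differ.")
else:
    print("Fail to reject the null at alpha = 0.05.")
```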