Evaluation Strategies for HCI Toolkit Research
David Ledo*, Steven Houben*, Jo Vermeulen*, Nicolai Marquardt, Lora Oehlberg and Saul Greenberg.
* Authors contributed equally to the work
Link to Original publication
Toolkit research plays an important role in the field of HCI, as it can heavily influence both the design and implementation of interactive systems. For publication, the HCI community typically expects toolkit research to include an evaluation component. The problem is that toolkit evaluation is challenging, as it is often unclear what ‘evaluating’ a toolkit means and what methods are appropriate. To address this problem, we analyzed 68 published toolkit papers. From our analysis, we provide an overview of, reflection on, and discussion of evaluation methods for toolkit contributions. We identify and discuss the value of four toolkit evaluation strategies, including the associated techniques that each employs. We offer a categorization of evaluation strategies for toolkit researchers, along with a discussion of the value, potential limitations, and trade-offs associated with each strategy.
In this repository you will find:
- Copy of the paper
- Original raw data
- Raw figures from our publication
- Presentation material
We Hope to Add
- Additional toolkits provided by other researchers
- Summary visualization of raw data
- Additional scripts
Notes About the Raw Data
Our data includes additional interpretation of contribution statements (G1 to G5), as described in our paper. If you are one of the authors of an included toolkit and feel our interpretation is inaccurate, we would be happy to revise it.
Want to Add Your Toolkit?
If you would like to add your toolkit, send the authors a CSV row in the same format as our raw data, and we will include it in the additional toolkits dataset.