Welcome to this repository! We are happy that you are interested in the experimental code that we used.
In this repo, you will find the oTree code that was used to collect the data for the paper:
📃 Blaufus, K., Piehl, K., & Schröder, M. (2025). Input Versus Output Incentives in Idea Generation — An Experimental Analysis.
👉 Find the working paper version at SSRN
This code is an adapted version of the code provided by:
📃 Laske, K., Römer, N., & Schröder, M. (2024). Piece-rate incentives and idea generation - An experimental analysis. Mimeo.
👉 Find the original repository provided by Nathalie Römer
This project uses the Word Illustration Task (WIT) and varies the incentives provided to ideators. In the task, ideators come up with a word they want to illustrate and find ways to do so. Ideators were instructed to create as many innovative ideas as possible; an idea counts as innovative if it is both original and of high quality.
See this file for the instructions shown to ideators.
We assessed originality by comparing each idea to a reference set of 300 ideas generated by different ideators in a prior experiment using a similar procedure. An idea was considered original if the illustrated word did not appear among the words illustrated in this reference set. Otherwise, it was classified as not original.
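The originality rule above can be sketched in a few lines of Python. This is an illustrative sketch only; the function and variable names (`is_original`, `reference_words`) are assumptions and do not come from this repository's code.

```python
def is_original(idea_word: str, reference_words: set[str]) -> bool:
    """An idea is original if its illustrated word does not appear among the
    words illustrated in the reference set (here, case-insensitive matching
    is an assumption on our part)."""
    return idea_word.strip().lower() not in reference_words


# Stand-in for the 300-word reference set from the prior experiment.
reference_words = {"apple", "house", "sun"}

print(is_original("Tree", reference_words))   # word not in the reference set
print(is_original("apple", reference_words))  # word already illustrated before
```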
We assessed quality through a survey among stylized customers. These customers received a monetary payoff for correctly identifying the exact illustrated word. An idea was classified as high quality if at least half of the customers correctly identified the illustrated word. Otherwise, it was classified as low quality.
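The quality threshold can likewise be expressed as a short sketch, assuming the rule stated above; the names (`is_high_quality`, `correct_guesses`, `n_raters`) are illustrative and not taken from the repository.

```python
def is_high_quality(correct_guesses: int, n_raters: int) -> bool:
    """An idea is high quality if at least half of the customer raters
    correctly identified the illustrated word."""
    return correct_guesses >= n_raters / 2


print(is_high_quality(5, 10))  # exactly half of the raters were correct
print(is_high_quality(4, 10))  # fewer than half were correct
```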
This is the main part of the experiment. It employs WIT, provides the corresponding instructions to ideators and delivers a follow-up questionnaire.
This part can be used by the experimenter to manually check the illustrations.
Please note that the file checking_illustrations.csv is necessary to run the code; this repository does not include the collected data.
Note that ideators were not allowed to, e.g., write out words instead of illustrating them. Illustrations that violated this rule were excluded from the customer rating and the subsequent analyses.
This is the separate customer rating used to assess the quality of ideas generated in the main experiment.
Please note that the file raters_recognizability.csv is necessary to run the code; this repository does not include the collected data.
📃 Blaufus, K., Piehl, K., & Schröder, M. (2025). Input Versus Output Incentives in Idea Generation — An Experimental Analysis.
👉 Working paper
👉 Replication package
👉 Preregistration
👉 Ethical approval
📃 Laske, K., Römer, N., & Schröder, M. (2024). Piece-rate incentives and idea generation - An experimental analysis. Mimeo.
👉 WIT Git Repository
📃 Chen, D. L., Schonger, M., and Wickens, C. (2016). oTree—An Open-Source Platform for Laboratory, Online, and Field Experiments. Journal of Behavioral and Experimental Finance, 9:88–97.
👉 oTree Documentation