The CrowdTruth framework captures human semantics through a pipeline of three processes:
a) combining various machine-processing components for text, image and video to better understand the input content and optimise its suitability for micro-tasks, thus reducing the time and cost of the crowdsourcing process;
b) providing reusable human-computing task templates to collect the maximum diversity in human interpretation, thus capturing richer human semantics; and c) implementing 'disagreement metrics', i.e. the CrowdTruth metrics, to support in-depth analysis of the quality and semantics of the crowdsourced data. Instead of the traditional inter-annotator agreement, we use inter-annotator disagreement as a useful signal for evaluating data quality, ambiguity and vagueness. In this paper we demonstrate the innovative CrowdTruth approaches embodied in the software to:
1) support processing of different text, image and video data;
2) support a variety of annotation tasks;
3) harness worker disagreement with the CrowdTruth metrics (see the sketch after this list); and
4) provide an interface to support data analysis and visualisation.
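To illustrate the idea behind item 3), the following is a minimal sketch of a disagreement-aware quality signal, assuming each worker's judgement on a unit is encoded as a hypothetical binary vector over a closed set of candidate annotations. It is not the official CrowdTruth implementation (which additionally weights workers and annotations by their own quality scores); it only shows how average pairwise cosine similarity between worker vectors can serve as a per-unit agreement signal, with low values pointing to ambiguity or vagueness rather than mere noise.

```python
# Illustrative sketch only: per-unit agreement as the average pairwise
# cosine similarity between worker annotation vectors. Worker vectors,
# labels and the example data below are hypothetical.
from itertools import combinations
import math


def cosine(u, v):
    """Cosine similarity between two annotation vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    if norm_u == 0 or norm_v == 0:
        return 0.0
    return dot / (norm_u * norm_v)


def unit_agreement(worker_vectors):
    """Average pairwise cosine similarity over all workers on one unit.

    A low score signals disagreement, which CrowdTruth treats as evidence
    of ambiguity or vagueness in the unit rather than as annotation error.
    """
    pairs = list(combinations(worker_vectors, 2))
    if not pairs:
        return 1.0
    return sum(cosine(u, v) for u, v in pairs) / len(pairs)


# Three workers annotating one unit over four candidate labels:
workers = [
    [1, 0, 0, 1],  # worker A selects labels 1 and 4
    [1, 0, 0, 0],  # worker B selects label 1
    [0, 1, 0, 1],  # worker C selects labels 2 and 4
]
print(round(unit_agreement(workers), 3))  # ~0.402: substantial disagreement
```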
In previous work we introduced the CrowdTruth methodology with examples of semantic interpretation of medical text for relation and factor extraction, and of newspaper text for event extraction. In this paper, we demonstrate the applicability and robustness of the approach across a wide variety of problems and domains. We also show the advantages of using open standards, the extensibility of the framework with new data modalities and annotation tasks, and its openness to external services.