A dataset of crowdsourced judgements on whether utterances considered "pragmatic rejections" have acceptance or rejection force.
The data contains 60 examples of pragmatic rejections (as described in [1]) extracted manually from the BNC, AMI and Switchboard corpora [2,3,4], as well as 30 test items used in the crowdsourcing study.
The data is categorised according to the annotation scheme of [1].
For every data point, the results of the crowdsourcing experiment are given, where participants were asked to categorise the dialogue on a 4-point scale:
- B definitely meant to agree with A’s statement.
- B probably meant to agree with A’s statement.
- B probably meant to disagree with A’s statement.
- B definitely meant to disagree with A’s statement.
The data points to a tension between linguistic theory and naive interpretation in the crowdsourcing experiment. One point of interest is that the data is text-only, even though prosody may be highly significant in these cases. This has been taken up in [5].
[1] Julian J. Schlöder and Raquel Fernández (2015). Pragmatic Rejection. Proceedings of the 11th International Conference on Computational Semantics (IWCS 2015).
[2] Burnard, L. (2000). Reference Guide for the British National Corpus (World Edition). Oxford University Computing Services.
[3] Carletta, J. (2007). Unleashing the killer corpus: experiences in creating the multi-everything AMI Meeting Corpus. Language Resources and Evaluation 41(2), 181-190.
[4] Godfrey, J. J., E. C. Holliman, and J. McDaniel (1992). SWITCHBOARD: Telephone Speech Corpus for Research and Development. Proceedings of ICASSP'92.
[5] Julian J. Schlöder and Alex Lascarides (2015). Interpreting English Pitch Contours in Context. Proceedings of the 19th Workshop on the Semantics and Pragmatics of Dialogue (SemDial 2015, "goDial").