
What makes a convincing argument? Empirical analysis and detecting attributes of convincingness in Web argumentation

Source code, data, and supplementary materials for our EMNLP 2016 article. Please use the following citation:

@InProceedings{habernal-gurevych:2016:EMNLP,
  title     = {What makes a convincing argument? Empirical analysis
               and detecting attributes of convincingness in Web argumentation},
  author    = {Habernal, Ivan and Gurevych, Iryna},
  booktitle = {Proceedings of the 2016 Conference on Empirical Methods
               in Natural Language Processing},
  year      = {2016},
  pages     = {1214--1223},
  publisher = {Association for Computational Linguistics},
  address   = {Austin, Texas},
  url       = {}
}

Abstract: This article tackles a new challenging task in computational argumentation. Given a pair of two arguments to a certain controversial topic, we aim to directly assess qualitative properties of the arguments in order to explain why one argument is more convincing than the other one. We approach this task in a fully empirical manner by annotating 26k explanations written in natural language. These explanations describe convincingness of arguments in the given argument pair, such as their strengths or flaws. We create a new crowd-sourced corpus containing 9,111 argument pairs, multi-labeled with 17 classes, which was cleaned and curated by employing several strict quality measures. We propose two tasks on this data set, namely (1) predicting the full label distribution and (2) classifying types of flaws in less convincing arguments. Our experiments with feature-rich SVM learners and Bidirectional LSTM neural networks with convolution and attention mechanism reveal that such a novel fine-grained analysis of Web argument convincingness is a very challenging task. We release the new UKPConvArg2 corpus and software under permissive licenses to the research community.

Drop me a line or report an issue if something is broken (and shouldn't be) or if you have any questions.

For license information, see LICENSE files in code/*/ and NOTICE.txt.

This repository contains experimental software and is published for the sole purpose of giving additional background details on the respective publication.

Project structure

  • code — experimental code
  • data — UKPConvArg2 corpus and data for experiments

Data format

The UKPConvArg2 corpus is stored in 32 XML files and is based on the UKPConvArg1 corpus (see our ACL 2016 paper). The data are licensed under CC-BY (Creative Commons Attribution 4.0 International License).

The source arguments originate from the UKPConvArg1 corpus (see above).

Here is an excerpt from the is-the-school-uniform-a-good-or-bad-idea-_bad.xml file; most of it is self-explanatory.

<?xml version="1.0"?>
      <text>I truly believe that wearing uniform is bad, because it ...</text>
      <originalHTML>&lt;p&gt;I truly believe that wearing uniform is bad, because it ...</originalHTML>
      <text>I think it is bad to wear school uniform because it ...</text>
      <originalHTML>&lt;p&gt;I think it is bad to wear school uniform because it ...</originalHTML>
      <title>Is the school uniform a good or bad idea?</title>
      <mTurkAssignmentWithReasonUnits>        <!-- this is taken from the UKPConvArg1 corpus -->
        <assignmentAcceptTime>2016-02-12 04:08:41.0 UTC</assignmentAcceptTime>
        <assignmentSubmitTime>2016-02-12 04:11:44.0 UTC</assignmentSubmitTime>
        <reason>a1is well written and intelligent</reason>
        <reasonUnits>                         <!-- these are the new annotations -->
            <reasonUnitText>a1is well written and intelligent</reasonUnitText>
            <!-- this was shown to the crowd-workers -->
            <textForAnnotation>Argument Xis well written and intelligent</textForAnnotation>
            <assignments>    <!-- we have 5 assignments -->
                <assignmentAcceptTime>2016-05-23 19:14:44.0 UTC</assignmentAcceptTime>
                <assignmentSubmitTime>2016-05-23 19:16:04.0 UTC</assignmentSubmitTime>
                <assignmentAcceptTime>2016-05-23 19:23:25.0 UTC</assignmentAcceptTime>
                <assignmentSubmitTime>2016-05-23 19:25:40.0 UTC</assignmentSubmitTime>
                <assignmentAcceptTime>2016-05-23 15:28:42.0 UTC</assignmentAcceptTime>
                <assignmentSubmitTime>2016-05-23 15:29:48.0 UTC</assignmentSubmitTime>
                <assignmentAcceptTime>2016-05-23 21:07:50.0 UTC</assignmentAcceptTime>
                <assignmentSubmitTime>2016-05-23 21:08:53.0 UTC</assignmentSubmitTime>
                <assignmentAcceptTime>2016-05-23 19:24:15.0 UTC</assignmentAcceptTime>
                <assignmentSubmitTime>2016-05-23 19:25:18.0 UTC</assignmentSubmitTime>
            <estimatedGoldLabel>o9_1</estimatedGoldLabel>    <!-- this is the estimated gold label -->
            <!-- see below explanation of these labels -->
            <!-- some were ignored, some had duplicate text and thus were
            not annotated, some were filtered out in previous pre-processing phases -->
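The excerpt above is abridged, but the element names it shows are enough to pull out the annotated reason units. The following is a sketch using the standard library's ElementTree, not the repository's own reader; it assumes only the tags visible in the excerpt, and the function name is mine.

```python
import xml.etree.ElementTree as ET

def extract_reason_units(xml_path):
    """Collect (reason unit text, estimated gold label) tuples from one
    corpus XML file. Only element names visible in the excerpt above are
    assumed; the enclosing structure is abridged there and may differ."""
    root = ET.parse(xml_path).getroot()
    units = []
    for unit in root.iter("reasonUnits"):
        # reasonUnitText and estimatedGoldLabel appear as children
        # of reasonUnits in the excerpt
        text = unit.findtext("reasonUnitText")
        label = unit.findtext("estimatedGoldLabel")
        units.append((text, label))
    return units
```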


The CSV files are generated from the XML files; here is an excerpt from is-the-school-uniform-a-good-or-bad-idea-_bad.xml.csv:

arg198954_arg236737	o8_1,o9_1	I truly believe that ...	I think it is bad to wear ...
arg203444_arg251309	o8_1,o9_1,o5_1,o6_3,o7_3	The school my mother works at, plus the school district ..	Their gay! Actually this ..
  • Each line is a single argument pair with tab-separated fields:
    • Pair ID (firstArgumentID_secondArgumentID)
    • Comma-delimited set of gold labels as presented in Figure 1 in the article
    • The more convincing argument
    • The less convincing argument
  • Line breaks within arguments are encoded as <br/>
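A minimal reader for this format could look like the sketch below; the function name and dictionary keys are illustrative, not part of the repository's code.

```python
def read_pairs(path):
    """Read one corpus CSV file into a list of dicts (a sketch).
    Each line holds four tab-separated fields: pair ID, comma-delimited
    gold labels, more convincing argument, less convincing argument."""
    pairs = []
    with open(path) as f:
        for line in f:
            line = line.rstrip("\n")
            if not line or line.startswith("#"):  # skip blanks / comments
                continue
            pair_id, labels, more, less = line.split("\t")
            pairs.append({
                "id": pair_id,
                "labels": labels.split(","),
                # undo the <br/> line-break encoding described above
                "more_convincing": more.replace("<br/>", "\n"),
                "less_convincing": less.replace("<br/>", "\n"),
            })
    return pairs
```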

Label explanation

The labels oA_B correspond to the following descriptions (note that we use a slightly different notation in the article: CA-B):

  • o8_1 Argument X has more details, information, facts, or examples / more reasons / better reasoning / goes deeper / is more specific
  • o8_4 Argument X is balanced, objective, discusses several viewpoints / well-rounded / tackles flaws in opposing views
  • o8_5 Argument X has better credibility / reliability / confidence
  • o8_6 Explanation is highly topic-specific and addresses the content of Argument X in detail
  • o9_1 Argument X is clear, crisp, to the point / well written
  • o9_2 Argument X sticks to the topic
  • o9_3 Argument X has provoking question / makes you think
  • o9_4 Argument X is well thought of / has smart remarks / higher complexity
  • o5_1 Argument X is attacking opponent / abusive
  • o5_2 Argument X has language issues / bad grammar / uses humor, jokes, or sarcasm
  • o5_3 Argument X is unclear, hard to follow
  • o6_1 Argument X provides no facts / not enough support / not credible evidence / no clear explanation
  • o6_2 Argument X has no reasoning / less or insufficient reasoning
  • o6_3 Argument X uses irrelevant reasons / irrelevant information
  • o7_1 Argument X is not an argument / is only opinion / is rant
  • o7_2 Argument X is non-sense / has no logical sense / confusing
  • o7_3 Argument X is off topic / doesn't address the issue
  • o7_4 Argument X is generally weak / vague

Labels o5_*, o6_* and o7_* are always attached to the less convincing argument, while labels o8_* and o9_* to the more convincing argument.
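This convention can be checked programmatically; the helper below is a sketch of mine, not part of the released software.

```python
def label_side(label):
    """Return which argument of a pair a gold label describes:
    o5_*, o6_*, o7_* always attach to the less convincing argument;
    o8_*, o9_* to the more convincing one."""
    group = label.split("_")[0]
    if group in ("o5", "o6", "o7"):
        return "less convincing"
    if group in ("o8", "o9"):
        return "more convincing"
    raise ValueError("unknown label: %r" % label)
```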


Requirements

  • Java 1.7 or higher and Maven (for the Java-based experiments)
  • Python 2.7 and virtualenv (for the Python-based experiments)
    • GPU is recommended but not required
  • Tested on 64-bit Linux versions


Deep-learning experiments

Installing Python dependencies:

  • virtualenv
user@ubuntu:~/emnlp2016-empirical-convincingness$ cd code/src/main/python/
user@ubuntu:~/emnlp2016-empirical-convincingness/code/src/main/python$ virtualenv env
Running virtualenv with interpreter /usr/bin/python2
New python executable in /home/user/emnlp2016-empirical-convincingness/code/src/main/python/env/bin/python2
Also creating executable in /home/user/emnlp2016-empirical-convincingness/code/src/main/python/env/bin/python
Installing setuptools, pkg_resources, pip, wheel...done.
  • requirements
user@ubuntu:~/emnlp2016-empirical-convincingness/code/src/main/python$ source env/bin/activate
(env) user@ubuntu:~/emnlp2016-empirical-convincingness/code/src/main/python$ pip install -r requirements.txt
 Successfully installed Keras-1.0.3 PyYAML-3.11 Theano-0.8.2 nltk-3.1 nose-1.3.7 numpy-1.10.2 scikit-learn-0.17.1 scipy-0.16.1 six-1.10.0

Adjust the path in

folds, word_index_to_embeddings_map = load_my_data("~/data2/convincingness-emnlp/step14-gold-csv/")

so that it points to data/CSV-Format.

  • Run the training script with THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32,optimizer_including=cudnn; adjust line 175 to select the desired model (MLP, LSTM, ATT+LSTM)
  • Run the script for the second experiment analogously

Java-based experiments

  • Install LIBSVM (version used: 3.21, December 2015)
    • Add svm-train and svm-predict to /usr/local/bin/
    • Alternatively, adjust the path constant in SVMLibExperimentRunner
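Assuming LIBSVM has already been compiled from its source tree with make, copying the two binaries onto the PATH is enough (the /usr/local/bin location matches the list above; adjust to taste):

```shell
# Run inside the compiled LIBSVM source directory (path is an assumption).
# Copies the two binaries the experiments call onto the PATH.
sudo cp svm-train svm-predict /usr/local/bin/
# sanity check: both should now resolve
command -v svm-train svm-predict
```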

Compile the Java project

$ cd code/
$ mvn package