High quality dataset for the task of Sarcasm Detection


Past studies in sarcasm detection mostly make use of Twitter datasets collected with hashtag-based supervision, but such datasets are noisy in terms of both labels and language. Furthermore, many tweets are replies to other tweets, and detecting sarcasm in them requires access to the contextual tweets.

To overcome the limitations related to noise in Twitter datasets, this Headlines dataset for sarcasm detection was collected from two news websites. TheOnion aims at producing sarcastic versions of current events, and we collected all the headlines from its News in Brief and News in Photos categories (which are sarcastic). We collected real (and non-sarcastic) news headlines from HuffPost.

This new dataset has the following advantages over the existing Twitter datasets:

  • Since news headlines are written by professionals in a formal manner, there are no spelling mistakes or informal usages. This reduces sparsity and also increases the chance of finding pre-trained embeddings.
  • Furthermore, since the sole purpose of TheOnion is to publish sarcastic news, we get high-quality labels with much less noise as compared to Twitter datasets.
  • Unlike tweets which are replies to other tweets, the news headlines we obtained are self-contained. This would help us in teasing apart the real sarcastic elements.


Each record consists of three attributes:

  • is_sarcastic: 1 if the record is sarcastic, otherwise 0

  • headline: the headline of the news article

  • article_link: link to the original news article, useful for collecting supplementary data
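For illustration, each record is a standalone JSON object with exactly these three attributes. The headline and link below are invented, not taken from the dataset:

```python
import json

# A hypothetical record illustrating the three attributes; the headline
# and link here are made up, not drawn from the dataset.
line = ('{"is_sarcastic": 1, '
        '"headline": "example sarcastic headline", '
        '"article_link": "https://example.com/article"}')

record = json.loads(line)
print(record["is_sarcastic"])  # 1
print(record["headline"])      # example sarcastic headline
```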

Reading the data

In Python, the data can be read using the following function. Each line of the file holds one JSON record, so json.loads is used here in place of eval, which is unsafe on untrusted input:

import json

def parseJson(fname):
    with open(fname, 'r') as f:
        for line in f:
            yield json.loads(line)

Example use case: data = list(parseJson('./Sarcasm_Headlines_Dataset.json'))
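Once loaded, each record is a plain dictionary, so quick checks such as the class balance are straightforward. The records below are invented stand-ins for the real data, so the snippet is self-contained:

```python
from collections import Counter

# Hypothetical records standing in for the output of
# list(parseJson('./Sarcasm_Headlines_Dataset.json')).
data = [
    {"is_sarcastic": 1, "headline": "a", "article_link": "x"},
    {"is_sarcastic": 0, "headline": "b", "article_link": "y"},
    {"is_sarcastic": 1, "headline": "c", "article_link": "z"},
]

counts = Counter(rec["is_sarcastic"] for rec in data)
print(counts[1], counts[0])  # 2 1
```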


The general statistics of this dataset, along with those of the high-quality (in terms of labels only) Twitter dataset provided by the SemEval challenge, are given in the following table.

Statistic                                      | Headlines | SemEval
-----------------------------------------------|-----------|--------
# Records                                      | 28,619    | 3,000
# Sarcastic records                            | 13,635    | 2,396
# Non-sarcastic records                        | 14,984    | 604
% of pre-trained word embeddings not available | 23.35     | 35.53

Notice that for the Headlines dataset, where the language is much more formal, the percentage of words missing from the word2vec vocabulary is considerably lower than for the SemEval dataset.
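The out-of-vocabulary percentage in the last row can be sketched as the fraction of distinct headline tokens missing from a pre-trained embedding vocabulary. The vocabulary and headlines below are toy stand-ins for the word2vec vocabulary and the dataset:

```python
# Sketch of the out-of-vocabulary computation; `vocab` is a toy
# stand-in for the word2vec vocabulary, and the headlines are made up.
def oov_percentage(headlines, vocab):
    tokens = {w for h in headlines for w in h.lower().split()}
    missing = [w for w in tokens if w not in vocab]
    return 100.0 * len(missing) / len(tokens)

vocab = {"local", "man", "wins", "award"}
headlines = ["Local man wins award", "Local man baffled"]
print(oov_percentage(headlines, vocab))  # 20.0
```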


As a basic exploration, the following figures visualize word clouds, through which we can see the types of words that occur frequently in each category.

Sarcastic Headlines

Wordcloud of Sarcastic Headlines

Non-sarcastic Headlines

Wordcloud of Non-sarcastic Headlines
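A word cloud is essentially a frequency visualization, so the counts underlying figures like the ones above can be sketched with a plain frequency table per category. The headlines here are invented, not drawn from the dataset:

```python
from collections import Counter

# Toy "sarcastic" headlines standing in for one category of the dataset;
# a word cloud would scale each word by its count in this table.
sarcastic = ["area man reports thing", "area woman reports thing"]

freq = Counter(w for h in sarcastic for w in h.split())
print(freq.most_common(3))  # [('area', 2), ('reports', 2), ('thing', 2)]
```

In practice, common stopwords would be filtered out before counting, since they would otherwise dominate the cloud.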


This dataset was collected from TheOnion and HuffPost.


Please cite the following if you use this data:

Sarcasm Detection using Hybrid Neural Network
Rishabh Misra, Prahal Arora
arXiv, August 2019