Did AI get more negative recently?

Data and code for the paper "Did AI get more negative recently?" by Dominik Beese, Begüm Altunbaş, Görkem Güzeler, and Steffen Eger, RSOS 2023.

[Figure: average stance per year and domain]

Content

The repository contains the following elements:

Citation

@article{DidAIGetMoreNegativeRecently,
          title = "Did {AI} get more negative recently?",
         author = "Dominik Beese and Beg{\"u}m Altunba{\c{s}} and G{\"o}rkem G{\"u}zeler and Steffen Eger",
        journal = "Royal Society Open Science",
         volume = "10",
         number = "3",
          pages = "221159",
           year = "2023",
            doi = "10.1098/rsos.221159",
            URL = "https://royalsocietypublishing.org/doi/abs/10.1098/rsos.221159",
         eprint = "https://royalsocietypublishing.org/doi/pdf/10.1098/rsos.221159",
      publisher = "The Royal Society Publishing",
}

Abstract: In this paper, we classify scientific articles in the domain of natural language processing (NLP) and machine learning (ML), as core subfields of artificial intelligence (AI), into whether (i) they extend the current state-of-the-art by the introduction of novel techniques which beat existing models or whether (ii) they mainly criticize the existing state-of-the-art, i.e. that it is deficient with respect to some property (e.g. wrong evaluation, wrong datasets, misleading task specification). We refer to contributions under (i) as having a ‘positive stance’ and contributions under (ii) as having a ‘negative stance’ (to related work). We annotate over 1.5 k papers from NLP and ML to train a SciBERT-based model to automatically predict the stance of a paper based on its title and abstract. We then analyse large-scale trends on over 41 k papers from the last approximately 35 years in NLP and ML, finding that papers have become substantially more positive over time, but negative papers also got more negative and we observe considerably more negative papers in recent years. Negative papers are also more influential in terms of citations they receive.
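For orientation only (this is not the authors' released code), the sketch below shows how a SciBERT-based stance classifier of the kind described in the abstract could be queried with Hugging Face Transformers, taking a paper's title and abstract as input. The checkpoint name allenai/scibert_scivocab_uncased is the public SciBERT base model; the two-label classification head and the label order are illustrative assumptions and would need to be fine-tuned on the annotated papers before the predictions mean anything.

```python
# Minimal sketch (assumptions noted below), not the authors' pipeline:
# predict a paper's stance (positive vs. negative towards related work)
# from its title and abstract with a SciBERT-based classifier.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "allenai/scibert_scivocab_uncased"  # public SciBERT base checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# num_labels=2 and the label order below are assumptions for illustration;
# the head is randomly initialised here and would be fine-tuned on the
# ~1.5k annotated papers before use.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def predict_stance(title: str, abstract: str) -> str:
    # The paper bases predictions on title and abstract, so both are fed
    # to the model as a sentence pair.
    inputs = tokenizer(title, abstract, truncation=True, max_length=512,
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    label_id = int(logits.argmax(dim=-1))
    return ["positive", "negative"][label_id]  # hypothetical label mapping

print(predict_stance(
    "Did AI get more negative recently?",
    "We classify scientific articles in NLP and ML by their stance towards related work."
))
```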