Censored tweets annotated for specificity

SpecificityTwitter is a tool to predict sentence specificity of social media posts.

The dataset and models in this package were obtained using co-training, as described in Gao et al., AAAI 2019.

Citation and contact

Please cite the AAAI-19 paper: Gao et al., Predicting and Analyzing Language Specificity in Social Media Posts

@InProceedings{gao2019specificity,
  author    = {Gao, Yifan  and  Zhong, Yang  and  Preo\c{t}iuc-Pietro, Daniel  and  Li, Junyi Jessy},
  title     = {Predicting and Analyzing Language Specificity in Social Media Posts},
  booktitle = {Proceedings of AAAI},
  year      = {2019},
  url       = {https://www.aaai.org/Papers/AAAI/2019/AAAI-GaoYifan.7009.pdf},
}

Dependencies

SpecificityTwitter is implemented using Python 3.6+. It depends on the following packages:

Our model is a support vector regression model implemented with scikit-learn. The last three packages, together with the Stanford CoreNLP toolkit, are required to generate the features used in prediction.
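As an illustration of the modeling setup (a minimal sketch with made-up features and scores, not the repo's training script), a scikit-learn support vector regressor is fit on per-sentence feature vectors with gold specificity scores in [1, 5]:

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical feature matrix: one row of features per sentence,
# paired with a gold specificity score (1-5) per sentence.
X = np.array([[0.2, 0.1], [0.8, 0.9], [0.5, 0.4], [0.9, 0.7]])
y = np.array([1.5, 4.5, 3.0, 4.0])

model = SVR(kernel="rbf", C=1.0)
model.fit(X, y)
scores = model.predict(X)  # predicted specificity scores, one per sentence
```

The actual feature set and hyperparameters used by the released model are defined in createFeatures.py and features.py.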

Data and resources

Word lexicons for the models are available for download here. Please note that these resources come with their own licenses. Decompress the tarball under the model directory.

Resources

There are several files in the resource folder.

  • Brown clusters (Turian et al., 2010)

    browncluster.txt

  • Concreteness ratings (Brysbaert et al., 2014)

    concrete.csv

  • GloVe word embeddings trained on Twitter posts (Pennington et al., 2014)

    glove.twitter.27B.100d.txt

  • Sentiment words (Hu and Liu, 2004)

    negative-words.txt

    positive-words.txt

  • Stanford NER tagger (Finkel et al., 2005)

    stanford-ner.jar

    english.muc.7class.distsim.crf.ser.gz
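To illustrate how lexicon resources of this kind can turn into features (an illustration only; the toy word sets below are stand-ins, not the Hu and Liu (2004) lists, and the repo's actual feature code lives in features.py), here is a simple sentiment-word count over a tokenized sentence:

```python
# Toy stand-in lexicons; the real ones are positive-words.txt / negative-words.txt.
positive = {"great", "good", "love"}
negative = {"bad", "terrible", "hate"}

def sentiment_counts(tokens):
    """Count positive and negative lexicon hits in a token list."""
    toks = [t.lower() for t in tokens]
    return (sum(t in positive for t in toks),
            sum(t in negative for t in toks))

features = sentiment_counts("I love this great , not bad at all".split())
# features == (2, 1): two positive hits, one negative hit
```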

Running SpecificityTwitter

Call:

$ python specificity.py --inputfile inputfile --outputfile predfile
  • <inputfile> should consist of word-tokenized sentences, one sentence per line;
  • <predfile> is the destination file to which SpecificityTwitter writes the specificity scores, one score per line, in the same order as the sentences in <inputfile>.

The scores are decimal numbers ranging from 1 to 5, with 1.0 being the most general and 5.0 the most specific.
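Since the prediction file is line-aligned with the input, downstream analysis is a simple zip. A hypothetical post-processing sketch (not part of the repo; the sentences and scores below are stand-ins for the contents of <inputfile> and <predfile>):

```python
# Stand-in lines for <inputfile> and <predfile>; in practice, read each
# file and pair line i of the input with line i of the predictions.
input_lines = ["great game last night .", "the senate passed the bill today ."]
pred_lines = ["1.8", "4.2"]  # scores in [1, 5], one per line

# Rank sentences from most specific to most general.
paired = sorted(zip(input_lines, (float(s) for s in pred_lines)),
                key=lambda p: p[1], reverse=True)
```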

Practical notes

  • Sentences must be word-tokenized before being fed into this model.
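One way to produce such input (an assumption, not the repo's prescribed pipeline; NLTK's TweetTokenizer is another common choice for tweets) is a small regex tokenizer that splits off punctuation while keeping hashtags, mentions, URLs, and contractions intact:

```python
import re

# Illustrative tokenizer for tweet-like text: URLs, then #hashtags/@mentions,
# then words (optionally with an apostrophe contraction), then punctuation.
TOKEN_RE = re.compile(r"https?://\S+|[#@]\w+|\w+(?:'\w+)?|[^\w\s]")

def tokenize(sentence):
    return TOKEN_RE.findall(sentence)

# One space-joined, tokenized sentence per line of the input file.
line = " ".join(tokenize("Can't wait for #AAAI2019, it's gonna be great!"))
```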

  • Note that the word embedding file is 1.2 GB and should be downloaded from the link above. Each run of specificity.py loads the file to generate features, so it is best to avoid running it repeatedly on small batches; alternatively, modify features.py to tailor data loading to your use case.
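If you do tailor the loading code, the GloVe text format is simple to cache: each line is a word followed by its space-separated vector components. A sketch of loading the file once and reusing the resulting dictionary (hedged: features.py may organize this differently; the demo uses a tiny toy file in place of glove.twitter.27B.100d.txt):

```python
import os
import tempfile

def load_glove(path):
    """Load a GloVe-format text file into a word -> vector dictionary."""
    embeddings = {}
    with open(path, encoding="utf-8") as f:
        for raw in f:
            parts = raw.rstrip().split(" ")
            embeddings[parts[0]] = [float(v) for v in parts[1:]]
    return embeddings

# Demo with a 3-dimensional toy file standing in for the real 100d embeddings.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("hello 0.1 0.2 0.3\nworld 0.4 0.5 0.6\n")
    toy_path = f.name
vectors = load_glove(toy_path)
os.unlink(toy_path)
```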