# lefex: A Tool for LExical FEature eXtraction

This project contains Hadoop jobs for the extraction of features of words and texts; it is based on UIMA and Hadoop MapReduce. Currently, the following types of features can be extracted:

  1. CoNLL. Given a set of HTML documents in the CSV format url<TAB>s3-path<TAB>html-document, this job outputs the dependency-parsed documents in the CoNLL format; an input example is sketched after this list. See the de.uhh.lt.lefex.CoNLL.HadoopMain class.
  2. ExtractTermFeatureScores. Given a corpus in plain text format, this job extracts word counts (word<TAB>count), feature counts (feature<TAB>count), and word-feature counts (word<TAB>feature<TAB>count) and saves them into CSV files. This job is used for feature extraction in the JoSimText project: the computation of a distributional thesaurus can be performed using the output of this job as input; a sketch of consuming these counts follows the list. See the de.uhh.lt.lefex.ExtractTermFeatureScores.HadoopMain class.
  3. ExtractLexicalSampleFeatureScores. Given a lexical sample dataset for word sense disambiguation in CSV format, this job extracts features of the target word in context and adds them as an extra column. Currently, the system supports extraction of three types of features of a target word: co-occurrences, dependency features, and trigrams; the first and the last are illustrated below. See the de.uhh.lt.lefex.ExtractLexicalSampleFeatures.HadoopMain class.
  4. SentenceSplitter. This job takes a plain text corpus as input and outputs a file with exactly one sentence per line, as illustrated in the last sketch below. See the de.uhh.lt.lefex.SentenceSplitter.HadoopMain class.
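
The expected input for the CoNLL job is one record per line. The snippet below is a minimal sketch that writes a single such record; the URL, S3 path, and HTML content are hypothetical values, and the HTML is assumed to fit on one line. The repository also contains a CoNLL.sh script for this job.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ConllInputExample {
    public static void main(String[] args) throws IOException {
        // One input record per line: url<TAB>s3-path<TAB>html-document.
        String record = String.join("\t",
                "http://example.com/page.html",
                "s3://example-bucket/crawl/page.html",
                "<html><body><p>The quick brown fox jumps over the lazy dog.</p></body></html>");
        Files.write(Paths.get("conll-input.csv"), (record + "\n").getBytes(StandardCharsets.UTF_8));
    }
}
```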
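
The count files produced by ExtractTermFeatureScores can be post-processed outside of Hadoop, for example to derive per-word feature weights for a distributional thesaurus. The sketch below is a hypothetical consumer: the file names word-counts.csv and word-feature-counts.csv are placeholders, tab-separated lines are assumed as described above, and the printed ratio is just one simple way to normalise the counts. An ExtractTermFeaturesScores.sh script for running the job itself is included in the repository.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

public class TermFeatureScoresExample {
    public static void main(String[] args) throws IOException {
        // word<TAB>count
        Map<String, Long> wordCounts = new HashMap<>();
        for (String line : Files.readAllLines(Paths.get("word-counts.csv"))) {
            String[] fields = line.split("\t");
            wordCounts.put(fields[0], Long.parseLong(fields[1]));
        }
        // word<TAB>feature<TAB>count: print the share of the word's occurrences
        // that carry this feature.
        for (String line : Files.readAllLines(Paths.get("word-feature-counts.csv"))) {
            String[] fields = line.split("\t");
            long wordFeatureCount = Long.parseLong(fields[2]);
            long wordCount = wordCounts.getOrDefault(fields[0], 1L);
            System.out.printf("%s\t%s\t%.4f%n", fields[0], fields[1], (double) wordFeatureCount / wordCount);
        }
    }
}
```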
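
To make the feature types of ExtractLexicalSampleFeatureScores more concrete, the toy snippet below shows what co-occurrence and trigram features of a target word look like for a single sentence. It is an illustration only, not the project's extraction code; the trigram notation is invented for the example, and dependency features are omitted because they require a parser.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class TargetWordFeaturesExample {
    public static void main(String[] args) {
        // Toy context with the target word "bank" at index 3.
        List<String> tokens = Arrays.asList("I", "deposited", "the", "bank", "cheque", "yesterday");
        int target = 3;

        // Co-occurrence features: the other tokens of the sentence.
        List<String> cooccurrences = new ArrayList<>(tokens);
        cooccurrences.remove(target);

        // Trigram feature: the tokens immediately to the left and right of the target.
        String trigram = tokens.get(target - 1) + "_@_" + tokens.get(target + 1);

        System.out.println("co-occurrences: " + cooccurrences);
        System.out.println("trigram: " + trigram);
    }
}
```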
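
Finally, the SentenceSplitter job produces exactly one sentence per line. The sketch below reproduces that output format with the JDK's BreakIterator purely for illustration; the job itself relies on the project's own sentence splitting (the project is UIMA-based), which may segment text differently. A SplitSentences.sh script is included in the repository.

```java
import java.text.BreakIterator;
import java.util.Locale;

public class SentencePerLineExample {
    public static void main(String[] args) {
        String corpus = "The weather was fine. We went for a walk.";
        BreakIterator boundaries = BreakIterator.getSentenceInstance(Locale.ENGLISH);
        boundaries.setText(corpus);
        // Print each detected sentence on its own line.
        for (int start = boundaries.first(), end = boundaries.next();
                end != BreakIterator.DONE;
                start = end, end = boundaries.next()) {
            System.out.println(corpus.substring(start, end).trim());
        }
    }
}
```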