Cross-domain Authorship Attribution

A support vector machine-based approach to cross-domain authorship attribution. This application was created specifically for the PAN19 cross-domain authorship attribution shared task.

Requirements

  • Both Python 3.6 and Python 2.7 (Python 2.7 is needed for the dependency parser)
  • The Python 3 packages listed in python3-requirements.txt
  • The Python 2 packages listed in python2-requirements.txt (see the example install commands below)
  • The official data set compiled for this shared task, or a data set in the same format
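
For example, assuming pip is available for both interpreters under the names pip3 and pip2, the required packages could be installed like this:

pip3 install -r python3-requirements.txt
pip2 install -r python2-requirements.txt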

Installation

Before the application can be used, some external data is needed and the part-of-speech tagger and dependency parser have to be trained. To do this, run the following script:

./scripts/trainAll.sh

Warning: This will take a while.

Usage

Before the SVM model can be used for training and testing, the input data must be preprocessed once. This can be done by running the following script, where the second path must point to a non-existing directory:

./scripts/runAll.sh path/to/training-dataset-2019-01-23 path/to/training-dataset-2019-01-23-processed

The preprocessing can also be triggered by adding the -r argument to the main script:

python3 svm.py -r data/training-dataset-2019-01-23 -i data/training-dataset-2019-01-23-processed -o outputs

If the input data has already been preprocessed, run the script without the -r argument:

python3 svm.py -i data/training-dataset-2019-01-23-processed -o outputs

If you want to see macro-averaged precision, recall, and F-scores during processing, add the --eval argument. The values are identical to the official shared task metrics:

python3 svm.py -i data/training-dataset-2019-01-23-processed -o outputs --eval
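
As a minimal illustration of what macro-averaging means (not necessarily how svm.py computes it internally), the scores for a single problem could be reproduced with scikit-learn, assuming it is installed and that gold and predicted author labels are available:

# Illustrative sketch: macro-averaged precision, recall and F-score
# over hypothetical gold and predicted author labels for one problem.
from sklearn.metrics import precision_recall_fscore_support

y_true = ["candidate1", "candidate2", "candidate1", "candidate3"]  # gold labels (example data)
y_pred = ["candidate1", "candidate2", "candidate3", "candidate3"]  # predicted labels (example data)

# average="macro" computes each metric per author and then takes the unweighted mean.
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print("macro precision=%.3f recall=%.3f f1=%.3f" % (precision, recall, f1))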
