
HunTag - a sequential tagger for NLP combining the linear classifier Liblinear and Hidden Markov Models. Based on training data, HunTag can perform any kind of sequential sentence tagging and has been used for NP chunking and Named Entity Recognition.

# Requirements

HunTag uses the Liblinear package, which can be downloaded from the Liblinear website.

In order for HunTag to work, Liblinear must be compiled with its Python bindings, and the directory containing the resulting Python files must be added to the PYTHONPATH environment variable.

IMPORTANT: after installing Liblinear, the Python bindings must be patched: cd to the python subdirectory of your Liblinear installation and run

    patch < (path-to-HunTag)/liblinear.patch

This allows Liblinear to handle the more memory-efficient ctypes input used by HunTag.

# Pre-trained models

Pre-trained models for Hungarian NP-chunking and NER are available from the HunTag webpage.

# Data format

Input data must be a tab-separated file with one word per line and an empty line to mark sentence boundaries. Each line must contain the same number of fields, and the last field must contain the correct tag for the word. Tags may be in the BI format used at CoNLL shared tasks (e.g. B-NP marks the first word of a noun phrase, I-NP the remaining words, and O words outside an NP) or in the so-called BIE1 format, which has a separate symbol for words constituting a chunk by themselves (1-NP) and one for the last word of a multi-word phrase (E-NP). The first two characters of answer tags should always conform to one of these two conventions; the rest may be any string describing the category.
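For illustration, a hypothetical three-column input fragment in BI format might look like this (fields are tab-separated; the tokens and the middle POS column are invented for the example, the last column is the tag):

```
The	DT	B-NP
dog	NN	I-NP
barked	VBD	O

It	PRP	B-NP
slept	VBD	O
```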


The flexibility of HunTag comes from the fact that it will generate any kind of features from the input data, given the appropriate Python functions. Several dozen features regularly used in NLP tasks are already implemented, and the user is encouraged to add any number of her own.

Once the desired features are implemented, a data set and a configuration file containing the list of feature functions to be used are all HunTag needs to perform training and tagging.
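As a sketch of what such a feature function might look like (the exact interface HunTag expects is an assumption here, and `is_capitalized` is an invented example, not one of the built-in features):

```python
def is_capitalized(token):
    """Hypothetical token-level feature: emits one feature string
    indicating whether the token starts with an upper-case letter."""
    return ["capitalized" if token[:1].isupper() else "lowercase"]

# A token feature receives one field of the current token and
# returns a list of feature strings for it.
print(is_capitalized("Budapest"))  # ['capitalized']
print(is_capitalized("dog"))      # ['lowercase']
```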

# Config file

The configuration file lists the features to be used for a given task. The feature file may start with a command specifying the default radius for features; this is optional. Example:

    !defaultRadius 5

Next, it can give values to variables that will be used by the featurizing methods. For example, the following three lines set the parameters of the feature called krpatt:

    let krpatt minLength 2
    let krpatt maxLength 99
    let krpatt lang hu

The second field specifies the name of the feature, the third a key, and the fourth a value. The dictionary of key-value pairs will be passed to the feature function.

After this come the actual assignments of feature names to feature functions. Examples:

    token ngr ngrams 0
    sentence bwsamecases isBetweenSameCases 1
    lex street hunner/lex/streetname.lex 0
    token lemmalowered lemmaLowered 0,2

The first keyword can have three values: token, lex, and sentence. For example, in the first example line above, the feature name ngr is assigned to the Python function ngrams(), which returns a feature string for the given token. The last field is a column or comma-separated list of columns; it specifies which fields of the input should be passed to the feature function. Counting starts from zero.

For sentence features, the input is aggregated sentence-wise into a list, and this list is then passed to the feature function. This function should return a list consisting of one feature string for each of the tokens of the sentence.
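For illustration, a hypothetical sentence-level feature following this contract (the interface is an assumption, and `token_position` is invented, not a built-in feature):

```python
def token_position(sentence):
    """Hypothetical sentence-level feature: receives the whole sentence
    (one list of field values per token) and returns exactly one feature
    string per token, here its zero-based position in the sentence."""
    return [str(i) for i in range(len(sentence))]

sent = [["The"], ["dog"], ["barked"]]
print(token_position(sent))  # ['0', '1', '2']
```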

For lex features, the second argument specifies a lexicon file rather than a Python function name. The specified token field is matched against the entries of this lexicon file.

# Usage

HunTag may be run in any of the following three modes:

## train

Used to train a Liblinear model given a training corpus and a set of feature functions. When run in train mode, HunTag creates three files: one containing the Liblinear model and two listing the features and labels and the integers they are mapped to when passed to Liblinear. With the --model option set to NAME, the three files will be stored as NAME.model, NAME.featureNumbers and NAME.labelNumbers, respectively.

    cat TRAINING_DATA | python train OPTIONS

Mandatory options:

    -c FILE, --config-file=FILE     read feature configuration from FILE
    -m NAME, --model=NAME           name of Liblinear model and lists
    -p PARAMS, --parameters=PARAMS  pass PARAMS to the Liblinear trainer

Non-mandatory options:

    -f FILE, --feature-file=FILE    write training events to FILE

## bigram-train

Used to train a bigram language model on a given field of the training data.

    cat TRAINING_DATA | python bigram-train OPTIONS

Mandatory options:

    -b FILE, --bigram-model=FILE    name of the bigram model file to be written
    -t FIELD, --tag-field=FIELD     FIELD containing the tags from which the bigram model is built
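What bigram-train computes can be sketched as plain relative-frequency estimation of tag-transition probabilities. This is an illustration only: HunTag's actual smoothing, sentence-boundary symbols, and storage format are not specified here, and the `<S>`/`</S>` markers are assumptions.

```python
from collections import defaultdict

def estimate_bigrams(tag_sequences):
    """Estimate P(t | t_prev) by relative frequency over tagged sentences.
    Hypothetical sketch; real bigram models usually add smoothing."""
    counts = defaultdict(lambda: defaultdict(int))
    for tags in tag_sequences:
        # Pad with sentence-boundary markers so start/end transitions count too.
        for prev, cur in zip(["<S>"] + tags, tags + ["</S>"]):
            counts[prev][cur] += 1
    # Normalize each row of counts into a conditional distribution.
    probs = {}
    for prev, nexts in counts.items():
        total = sum(nexts.values())
        probs[prev] = {t: c / total for t, c in nexts.items()}
    return probs

model = estimate_bigrams([["B-NP", "I-NP", "O"], ["B-NP", "O"]])
print(model["B-NP"])  # {'I-NP': 0.5, 'O': 0.5}
```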

## tag

Used to tag input. Given a maxent model providing the value P(t|w) for all tags t and words (sets of feature values) w, and a bigram language model supplying P(t|t0) for all pairs of tags, HunTag will assign to each sentence the most likely tag sequence.

    cat INPUT | python tag OPTIONS

Mandatory options:

    -m NAME, --model=NAME           name of Liblinear model file and lists
    -b FILE, --bigram-model=FILE    name of bigram model file
    -c FILE, --config-file=FILE     read feature configuration from FILE

Non-mandatory options:

    -l L, --language-model-weight=L    set the weight of the language model to L (default is 1)
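The decoding step can be sketched as a standard Viterbi search combining the per-token scores P(t|w) with the weighted bigram scores P(t|t0) in log space; the weight plays the role of the -l option above. This is an illustration of the technique, not HunTag's actual implementation:

```python
import math

def viterbi(emissions, transitions, lm_weight=1.0):
    """emissions: one dict per token mapping tag -> P(tag | features);
    transitions: dict mapping (prev_tag, tag) -> P(tag | prev_tag).
    Returns the most likely tag sequence (hypothetical sketch)."""
    tags = list(emissions[0])
    # Log-score of the best path ending in each tag at the first token.
    score = {t: math.log(emissions[0][t]) for t in tags}
    back = []
    for em in emissions[1:]:
        new_score, ptr = {}, {}
        for t in tags:
            best_prev = max(
                tags,
                key=lambda p: score[p] + lm_weight * math.log(transitions[(p, t)]),
            )
            new_score[t] = (
                score[best_prev]
                + lm_weight * math.log(transitions[(best_prev, t)])
                + math.log(em[t])
            )
            ptr[t] = best_prev
        score, back = new_score, back + [ptr]
    # Follow backpointers from the best final tag.
    last = max(tags, key=lambda t: score[t])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

ems = [{"B": 0.9, "O": 0.1}, {"B": 0.2, "O": 0.8}]
trs = {("B", "B"): 0.3, ("B", "O"): 0.7, ("O", "B"): 0.5, ("O", "O"): 0.5}
print(viterbi(ems, trs))  # ['B', 'O']
```

Raising lm_weight shifts the balance toward the bigram model, favoring tag sequences that are likely a priori over ones the per-token classifier prefers.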


HunTag was created by Gábor Recski and Dániel Varga. It is a reimplementation and generalization of a Named Entity Recognizer built by Dániel Varga and Eszter Simon.


HunTag is made available under the GNU Lesser General Public License v3.0. If you received HunTag in a package that also contains the Hungarian training corpora for the Named Entity Recognition and chunking tasks, please note that these corpora are derivative works based on the Szeged Treebank and are made available under the same restrictions that apply to the original Szeged Treebank.


If you use the tool, please cite the following paper:

Gábor Recski, Dániel Varga (2009): A Hungarian NP-chunker. In: The Odd Yearbook. ELTE SEAS Undergraduate Papers in Linguistics. Budapest: ELTE School of English and American Studies. pp. 87-93.

   @article{recski2009chunker,
     author = {Recski, G{\'a}bor and Varga, D{\'a}niel},
     title = {{A Hungarian NP Chunker}},
     journal = {The Odd Yearbook. ELTE SEAS Undergraduate Papers in Linguistics},
     publisher = {ELTE {S}chool of {E}nglish and {A}merican {S}tudies},
     address = {Budapest},
     year = {2009},
     pages = {87--93},
   }

If you use some specialized version for Hungarian, please also cite the following paper:

Dóra Csendes, János Csirik, Tibor Gyimóthy and András Kocsor (2005): The Szeged Treebank. In: Text, Speech and Dialogue. Lecture Notes in Computer Science Volume 3658/2005, Springer: Berlin. pp. 123-131.

   @incollection{csendes2005szeged,
     author = {Csendes, D{\'o}ra and Csirik, J{\'a}nos and Gyim{\'o}thy, Tibor and Kocsor, Andr{\'a}s},
     title = {The {S}zeged {T}reebank},
     booktitle = {Lecture Notes in Computer Science: Text, Speech and Dialogue},
     publisher = {Springer},
     address = {Berlin},
     year = {2005},
     pages = {123--131},
   }








