Evaluation

S Wade edited this page Sep 9, 2015 · 9 revisions

Named Entity Recognition Performance

The English model file that comes with MITIE achieves an F1 score of 88.10 on the CoNLL 2003 NER task, measured on the eng.testb test set. The model was trained on the training and development data from the CoNLL 2003 task.

These results are comparable to other state-of-the-art NER systems like the Stanford NER tool.

The Spanish NER model achieves a lower F1 score of 80.62 on the CoNLL 2002 NER task, so its accuracy is noticeably worse than that of the English model. However, it is comparable to other state-of-the-art NER models for Spanish.
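The F1 scores above are entity-level: a predicted entity counts as correct only if both its span and its label exactly match a gold entity. As a minimal sketch (assuming entities are represented as (start, end, label) tuples; this helper is illustrative, not part of MITIE):

```python
# Entity-level F1 as used for the CoNLL NER scores above.
# Entities are exact-match (start, end, label) spans; this is a
# simplified stand-in for the official conlleval scoring.
def f1_score(gold, predicted):
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)                      # exact span+label matches
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = [(0, 2, "PER"), (5, 6, "LOC"), (9, 11, "ORG")]
pred = [(0, 2, "PER"), (5, 6, "ORG"), (9, 11, "ORG")]
print(round(100 * f1_score(gold, pred), 2))  # 66.67
```

Here two of three predictions match exactly, so precision and recall are both 2/3 and F1 is 66.67.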

Named Entity Recognition Speed

MITIE's NER implementation is designed for bulk data processing at high speeds. We measured this by running part of the English Gigaword corpus through MITIE and measuring the total processing time. On a 2.40GHz Intel Xeon (E5-2665, single-threaded), MITIE is able to process 53,600 words per second. This is significantly faster than most research NER systems available via open source licenses.
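A throughput measurement like the one above can be sketched as follows. The `tag_tokens` callable is a hypothetical stand-in for the real NER call; the corpus here is synthetic, not Gigaword:

```python
import time

# Sketch of a words-per-second measurement: time a tagger over a
# tokenized corpus and divide total words by elapsed wall time.
def words_per_second(documents, tag_tokens):
    total_words = 0
    start = time.perf_counter()
    for tokens in documents:
        tag_tokens(tokens)           # run NER on one document
        total_words += len(tokens)
    elapsed = time.perf_counter() - start
    return total_words / elapsed

docs = [["John", "lives", "in", "Boston", "."]] * 1000
rate = words_per_second(docs, lambda tokens: [t.upper() for t in tokens])
print(f"{rate:.0f} words/sec")
```

The reported number depends heavily on hardware and corpus, which is why the figure above pins down the CPU model and single-threaded setup.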

A benchmark comparing speed and accuracy can be found here: http://gbowyer.freeshell.org/ner-perf.png.