# ru_sent_tokenize

A simple and fast rule-based sentence segmenter for Russian, tested on the OpenCorpora and SynTagRus datasets.

## Installation

```bash
pip install rusenttokenize
```

## Running

```python
>>> from rusenttokenize import ru_sent_tokenize
>>> ru_sent_tokenize('Эта шоколадка за 400р. ничего из себя не представляла. Артём решил больше не ходить в этот магазин')
['Эта шоколадка за 400р. ничего из себя не представляла.', 'Артём решил больше не ходить в этот магазин']
```

## Metrics

The tokenizer has been tested on the OpenCorpora and SynTagRus datasets. Two metrics were measured; a sketch of how they could be computed follows below.

**Precision.** We took single sentences from the datasets and measured how often the tokenizer left them unsplit.

**Recall.** We took pairs of consecutive sentences from the datasets, joined each pair with a space character, and measured how often the tokenizer correctly split the joined text back into the two original sentences.
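As a rough illustration only (the notebook linked below is the authoritative benchmark code), the two metrics could be computed along these lines. The `evaluate` helper and the `sentences` list of gold single sentences are assumptions of this sketch, not part of the package:

```python
from rusenttokenize import ru_sent_tokenize

def evaluate(sentences):
    """Sketch: `sentences` is a list of gold single sentences
    from a dataset such as OpenCorpora or SynTagRus."""
    # Precision: a single gold sentence should come back unsplit.
    kept_whole = sum(1 for s in sentences if len(ru_sent_tokenize(s)) == 1)
    precision = kept_whole / len(sentences)

    # Recall: two consecutive sentences joined with a space should be
    # split back into exactly the two originals.
    pairs = list(zip(sentences, sentences[1:]))
    correct = sum(1 for a, b in pairs
                  if ru_sent_tokenize(a + ' ' + b) == [a, b])
    recall = correct / len(pairs)
    return precision, recall
```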

| Tokenizer | OpenCorpora precision | OpenCorpora recall | OpenCorpora time (sec) | SynTagRus precision | SynTagRus recall | SynTagRus time (sec) |
|---|---|---|---|---|---|---|
| nltk.sent_tokenize | 94.30 | 86.06 | 8.67 | 98.15 | 94.95 | 5.07 |
| nltk.sent_tokenize(x, language='russian') | 95.53 | 88.37 | 8.54 | 98.44 | 95.45 | 5.68 |
| bureaucratic-labs.segmentator.split | 97.16 | 88.62 | 359 | 96.79 | 92.55 | 210 |
| ru_sent_tokenize | 98.73 | 93.45 | 4.92 | 99.81 | 98.59 | 2.87 |

The notebook shows how the table above was calculated.
