A part of the RuMor project. It provides a pipeline for preprocessing and tokenizing texts in Russian, and also performs preliminary entity tagging. Highlights are:
- Extracting emojis, emails, dates, phone numbers, URLs, HTML/XML fragments, etc.
- Tagging/removing tokens with disallowed symbols
- Normalizing punctuation
- Tokenization (via NLTK)
- Russian Wikipedia tokenizer
- brat annotations support
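Toxine's own API is out of scope here, but the kind of preliminary entity tagging listed above can be sketched with plain regular expressions. This is a minimal stdlib-only illustration, not Toxine's actual implementation; the patterns and tag names are hypothetical:

```python
import re

# Hypothetical patterns illustrating the kind of entities Toxine tags;
# the library's real patterns are far more elaborate.
PATTERNS = [
    ('EntityEmail', re.compile(r'\b[\w.+-]+@[\w-]+\.[\w.-]+\b')),
    ('EntityUrl',   re.compile(r'https?://\S+')),
    ('EntityPhone', re.compile(r'\+?\d[\d\s()-]{7,}\d')),
]

def tag_entities(text):
    """Replace recognized entity spans with placeholder tags."""
    for tag, pattern in PATTERNS:
        text = pattern.sub(tag, text)
    return text

print(tag_entities('Write to ivan@example.ru or visit https://example.ru'))
# → 'Write to EntityEmail or visit EntityUrl'
```

Replacing entities with placeholder tags before tokenization keeps the tokenizer from splitting them apart, which is why this step comes first in such pipelines.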
Toxine supports Python 3.5 or later. To install it via pip, run:
$ pip install toxine
If you currently have a previous version of Toxine installed, use:
$ pip install toxine -U
Alternatively, you can install Toxine from the source of this git repository:
$ git clone https://github.com/fostroll/toxine.git
$ cd toxine
$ pip install -e .
This gives you access to examples that are not included in the PyPI package.
Toxine uses NLTK with the punkt data downloaded. If you haven't done so yet, start a Python interpreter and execute:
>>> import nltk
>>> nltk.download('punkt')
NB: If you plan to use the methods for brat annotations renewal, you also need the python-Levenshtein library:
$ pip install python-Levenshtein
See more on the brat annotations support page.
Wrapper for tokenized Wikipedia
You can find usage examples in the examples directory of our Toxine GitHub repository.
Toxine is released under the BSD License. See the LICENSE file for more details.