Hlm at your fingertips
This repo currently just has
- 340-脂砚斋重批红楼梦.txt: hlm_zh
- david-hawks-the-story-of-the-stone.txt: hlm_en
- yang-hlm.txt: hlm_en1
- joly-hlm.txt: hlm_en2
It may be expanded to include other versions. If you wish to have a particular version included, make a pull request (fork the repo, upload the files, and click the pull request button) or send me a text file.
hlm_texts depends on polyglot, which in turn depends on libicu.
To install libicu, e.g.
- Ubuntu: sudo apt install libicu-dev
- CentOS: yum install libicu
- macOS: brew install icu4c
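Before building PyICU, you can sanity-check that the ICU shared library is visible to Python (a minimal sketch; the library name and search path vary by platform):

```python
from ctypes.util import find_library

# "icuuc" is the usual ICU library name on Linux; on macOS a Homebrew
# icu4c install is keg-only and may not be on the default search path.
print(find_library("icuuc") or "libicu not found on the default search path")
```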
Then use poetry or pip to install PyICU, pycld2 and Morfessor, e.g.
pip install PyICU pycld2 Morfessor
or
poetry add PyICU pycld2 Morfessor
Alternatively (e.g. on Windows), download and install the PyICU and pycld2 whl packages (possibly also Morfessor) for your OS/Python version from https://www.lfd.uci.edu/~gohlke/pythonlibs/#pyicu and https://www.lfd.uci.edu/~gohlke/pythonlibs/#pycld2 (Morfessor: https://www.lfd.uci.edu/~gohlke/pythonlibs/).
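Either way, a quick check that the three dependencies import correctly (a minimal sketch, not part of the package):

```python
import icu        # PyICU's import name is "icu"
import pycld2
import morfessor  # Morfessor's import name is lowercase

print("ICU version:", icu.ICU_VERSION)
# pycld2.detect returns (isReliable, textBytesFound, details)
print(pycld2.detect("The Story of the Stone 红楼梦")[2][0])
```

Then install hlm-texts itself: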
pip install hlm-texts
# pip install hlm-texts -U # to upgrade to the newest version
or install the newest version
pip install git+https://github.com/ffreemt/hlm-texts
or clone the repo and install from source
git clone https://github.com/ffreemt/hlm-texts
cd hlm-texts
pip install -r requirements.txt
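A quick check of which copy of the package Python picks up (handy after a from-source install; a minimal sketch):

```python
# Confirm hlm_texts is importable and see where it was installed.
import hlm_texts
print(hlm_texts.__file__)
```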
from hlm_texts import hlm_en, hlm_zh, hlm_en1, hlm_en2
- hlm_zh: 340-脂砚斋重批红楼梦.txt
- hlm_en: david-hawks-the-story-of-the-stone.txt
- hlm_en1: yang-hlm.txt
- hlm_en2: joly-hlm.txt
All four texts come with blank lines removed and paragraphs retained.
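For example, a quick look at what you get (a minimal sketch; whether each variable is a single string or a list of paragraphs is not spelled out above, so the snippet only inspects type and size):

```python
from hlm_texts import hlm_en, hlm_zh

# Inspect the type and rough size of the bundled texts.
for name, text in [("hlm_en", hlm_en), ("hlm_zh", hlm_zh)]:
    print(name, type(text).__name__, len(text))
```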
For tokenizing a text or a list of texts into sentences:
from hlm_texts import sent_tokenizer, hlm_en
hlm_en_sents = sent_tokenizer(hlm_en, lang="en")
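Since a list of texts is also accepted, presumably several texts can be tokenized in one call (an assumption based on the description above, not verified against the package):

```python
from hlm_texts import sent_tokenizer, hlm_en, hlm_en1

# Assumed usage: pass a list of texts; lang="en" as in the single-text example.
sents = sent_tokenizer([hlm_en, hlm_en1], lang="en")
print(len(sents))
```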
Tokenizing long English texts for the first time can take a while (3-5 minutes for hlm_en, hlm_en1, hlm_en2). Subsequent operations are, however, instant since sent_tokenizer is cached in ~/joblib_cache (\Users\xyz\joblib_cache for Windows 10).
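The caching presumably relies on joblib's Memory decorator; a minimal sketch of that pattern (an illustration of the technique, not the package's actual code; the cache directory name is taken from above):

```python
from pathlib import Path
from joblib import Memory

# Cache results on disk so a repeated call with the same arguments
# returns immediately instead of recomputing.
memory = Memory(str(Path.home() / "joblib_cache"), verbose=0)

@memory.cache
def slow_tokenize(text, lang="en"):
    # Stand-in for the real sentence tokenizer.
    return text.split(". ")

slow_tokenize("A long text. Another sentence.")  # computed and cached
slow_tokenize("A long text. Another sentence.")  # served from ~/joblib_cache
```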
The repo is for study purposes only. If you believe that your rights or interests have been violated in any way, please let me know and I'll promptly follow up with appropriate actions.