doc2term


A fast NLP tokenizer that detects sentences, words, numbers, URLs, hostnames, emails, filenames, dates, and phone numbers. It also standardizes documents, removing punctuation and duplicated tokens.
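
As a rough sketch of what entity detection means in practice, the call below feeds text containing an email, a date, and a phone number through the tokenizer (the sample string is made up here, and the exact output depends on the flags and the build; see the examples further down for confirmed behavior):

import doc2term

# Entities such as emails, dates, and phone numbers are detected as units
# rather than being split apart on their internal punctuation.
print(doc2term.doc2term_str("Contact john@example.com before 2021-05-01, or call 555-0100."))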

Installation

git clone https://github.com/callforpapers-source/doc2term
cd doc2term
python setup.py install

Compilation

Installation requires compiling the original C code with gcc.
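
A quick smoke test to confirm the extension compiled and imports correctly (a hypothetical check; any short string works, and a cleaned string should print without an ImportError):

python -c "import doc2term; print(doc2term.doc2term_str('Hello, world.'))"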

Usage

Example notebook: doc2term

Example

>>> import doc2term

>>> doc2term.doc2term_str("Actions speak louder than words. ... ")
"Actions speak louder than words ."
>>> doc2term.doc2term_str("You can't judge a book by its cover. ... from thoughtcatalog.com")
"You can't judge a book by its cover . from"

>>> doc2term.doc2term_str("You can't judge a book by its cover. ... from thoughtcatalog.com", include_hosts_files=1)
"You can't judge a book by its cover . from thoughtcatalog.com"
