Chartokenizer is a Python package for basic character-level tokenization. It generates a character-to-index mapping for tokenizing strings at the character level, which is useful in natural language processing (NLP) tasks where text must be preprocessed for analysis or modeling.
- Generates a character-to-index mapping for tokenizing strings.
text = "This is a Demo Text."
# When tokenized using chartokenizer
{' ': 0, '.': 1, 'D': 2, 'T': 3, 'a': 4, 'e': 5, 'h': 6, 'i': 7, 'm': 8, 'o': 9, 's': 10, 't': 11, 'x': 12}
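  For illustration, a mapping like the one above can be reproduced in plain Python by sorting the unique characters of the input. This is only a sketch of the idea, not necessarily how chartokenizer builds the mapping internally:

  ```python
  # Sketch only: reproduces the mapping shown above; chartokenizer's
  # internals may differ.
  text = "This is a Demo Text."
  mapping = {ch: i for i, ch in enumerate(sorted(set(text)))}
  print(mapping)
  # {' ': 0, '.': 1, 'D': 2, 'T': 3, 'a': 4, 'e': 5, 'h': 6, 'i': 7,
  #  'm': 8, 'o': 9, 's': 10, 't': 11, 'x': 12}
  ```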
- Supports both custom character sets and a predefined classic character set.
  ```python
  # Predefined classic character set:
  r" !#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ^_`abcdefghijklmnopqrstuvwxyz{|}~"
  ```
- Provides tokenization and detokenization functions.
  ```
  # For the predefined character set:
  "hello" => tokenize => [68, 65, 72, 72, 75]
  [68, 65, 72, 72, 75] => detokenize => "hello"
  ```
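  Conceptually, tokenization is a lookup into the character-to-index mapping, and detokenization uses the inverse mapping. A minimal sketch over the classic character set (the package's actual implementation may differ):

  ```python
  # Sketch only: plain dictionary lookups over the classic character set.
  charset = r" !#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ^_`abcdefghijklmnopqrstuvwxyz{|}~"
  char_to_index = {ch: i for i, ch in enumerate(charset)}
  index_to_char = {i: ch for ch, i in char_to_index.items()}

  tokens = [char_to_index[ch] for ch in "hello"]
  print(tokens)                                     # [68, 65, 72, 72, 75]
  print("".join(index_to_char[t] for t in tokens))  # hello
  ```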
- Allows saving and loading the character-to-index mapping dictionary to/from a file.
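  Since the mapping is a plain dict, it can also be persisted with the standard library. This is a sketch of the idea; the package's own save/load helpers may use a different format:

  ```python
  # Sketch only: persisting a mapping as JSON; chartokenizer's own
  # save/load helpers may differ.
  import json

  mapping = {' ': 0, '.': 1, 'D': 2, 'T': 3}  # truncated example mapping

  with open("mapping.json", "w") as f:
      json.dump(mapping, f)

  with open("mapping.json") as f:
      loaded = json.load(f)

  assert loaded == mapping
  ```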
- Supports padding or truncating tokenized sequences to a fixed length.
# Padding "hello" to length of 10 with values of 0
"hello" => tokenize => [68, 65, 72, 72, 75] => pad_sequence => [68, 65, 72, 72, 75, 0, 0, 0, 0, 0]
`chartokenizer` is available as a PyPI package. You can install it via pip:
```bash
pip install chartokenizer
```
```python
from chartokenizer import Tokenizer

# Initialize the tokenizer
tokenizer = Tokenizer()

# Generate the character-to-index mapping dictionary
dictionary = tokenizer.initialize(string="your_text_here")

# Tokenize a string
tokens = tokenizer.tokenize(dictionary, "your_text_here")

# Detokenize tokens back to a string
text = tokenizer.detokenize(dictionary, tokens)
```
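As a concrete round trip, here is the quickstart applied to the demo text from the features section. The token values assume the mapping shown there, and the example assumes `tokenize` accepts any string whose characters appear in the mapping:

```python
from chartokenizer import Tokenizer

tokenizer = Tokenizer()

# Build the mapping from the demo text
dictionary = tokenizer.initialize(string="This is a Demo Text.")

# "This" tokenizes to [3, 6, 7, 10] under the mapping shown earlier
tokens = tokenizer.tokenize(dictionary, "This")
print(tokens)

# ...and detokenizes back to "This"
print(tokenizer.detokenize(dictionary, tokens))
```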
For more detailed usage and options, refer to the documentation.
Contributions are welcome! If you encounter any issues or have suggestions for improvements, please feel free to open an issue or submit a pull request on GitHub.
Released under the Apache License by @MrTechyWorker.
- This package was inspired by the need for a simple and efficient character-level tokenizer in natural language processing tasks.
Learn, Build, Develop !! 😉