
jptranstokenizer: Japanese Tokenizer for transformers


This is a repository for a Japanese tokenizer that works with the Hugging Face Transformers library.
You can use JapaneseTransformerTokenizer just like transformers.BertJapaneseTokenizer.
Issues may be written in Japanese.

Documentation

Documentation is available on Read the Docs.

Install

pip install jptranstokenizer

Quickstart

This example uses jptranstokenizer.JapaneseTransformerTokenizer with the sentencepiece model of nlp-waseda/roberta-base-japanese and Juman++.
Before following these steps, you need to install pyknp and Juman++.

>>> from jptranstokenizer import JapaneseTransformerTokenizer
>>> tokenizer = JapaneseTransformerTokenizer.from_pretrained("nlp-waseda/roberta-base-japanese")
>>> tokens = tokenizer.tokenize("外国人参政権")
# tokens: ['▁外国', '▁人', '▁参政', '▁権']
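In the output above, the "▁" prefix is the SentencePiece convention marking the start of a word (here, the word boundaries produced by Juman++ pre-segmentation). A minimal sketch, using only the Python standard library, of how the surface string can be reconstructed from such tokens:

```python
# Tokens as produced in the example above (SentencePiece-style;
# "▁" marks the start of each pre-segmented word).
tokens = ['▁外国', '▁人', '▁参政', '▁権']

# Reconstruct the surface form: concatenate and drop the "▁" markers.
text = "".join(tokens).replace("▁", "")
print(text)  # → 外国人参政権
```

Note this is only an illustration of the token format; in practice the tokenizer's own decoding methods should be used.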

Note that different dependencies are required depending on the type of tokenizer you use.
See also the Quickstart on Read the Docs.

Citation

Another paper is forthcoming; please check back here before citing.

This Implementation

@inproceedings{Suzuki-2023-nlp,
  jtitle = {{異なる単語分割システムによる日本語事前学習言語モデルの性能評価}},
  title = {{Performance Evaluation of Japanese Pre-trained Language Models with Different Word Segmentation Systems}},
  jauthor = {鈴木, 雅弘 and 坂地, 泰紀 and 和泉, 潔},
  author = {Suzuki, Masahiro and Sakaji, Hiroki and Izumi, Kiyoshi},
  jbooktitle = {言語処理学会 第29回年次大会 (NLP2023)},
  booktitle = {29th Annual Meeting of the Association for Natural Language Processing (NLP)},
  year = {2023},
  pages = {894-898}
}

Related Work