Help on using the model for finetuning #13
Comments
The subword tokenization is done with the WordPiece algorithm, the same as in the original BERT models by Google. We have an example usage of a pretrained model with a tokenizer here (it doesn't contain much information, though).
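For reference, here is a minimal sketch of what that looks like with the transformers library; the sentence is only illustrative, and the model name is the one used in this thread:

```python
# Minimal sketch, not from the thread: the sentence is only illustrative,
# and loading BertJapaneseTokenizer assumes MeCab and its Python bindings
# are installed.
from transformers import BertJapaneseTokenizer

tokenizer = BertJapaneseTokenizer.from_pretrained(
    'bert-base-japanese-whole-word-masking')

# tokenize() first segments the text into words with MeCab, then splits
# words that are not in the vocabulary into WordPiece subwords ("##..."),
# falling back to [UNK] when even the pieces are unknown.
print(tokenizer.tokenize('自然言語処理はとても面白いです。'))
```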
Hi, I am looking at your example. Can I assume that the tokenizer created with BertJapaneseTokenizer.from_pretrained('bert-base-japanese-whole-word-masking') already includes tokenization (word segmentation) by MeCab, so that I don't need to process the input with MeCab first? How about sentence splitting? If my input is a whole passage rather than sentences separated by newlines, I need to apply a sentence splitter before sending the input to the tokenizer, right?
Yes, BertJapaneseTokenizer includes word segmentation, so you don't need to split the text into words yourself. On the other hand, you will have to split the text into sentences yourself.
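A rough sketch of that workflow, assuming a naive split on the Japanese full stop (the passage and the split rule are only illustrative):

```python
# Minimal sketch of handling a whole passage: the tokenizer takes care of
# word segmentation and subwords, but sentence splitting is up to you.
# Splitting on the Japanese full stop here is only an illustration.
from transformers import BertJapaneseTokenizer

tokenizer = BertJapaneseTokenizer.from_pretrained(
    'bert-base-japanese-whole-word-masking')

passage = '吾輩は猫である。名前はまだ無い。'
sentences = [s + '。' for s in passage.split('。') if s]

for sent in sentences:
    # encode() adds the [CLS]/[SEP] special tokens and maps to vocabulary ids
    ids = tokenizer.encode(sent)
    print(tokenizer.convert_ids_to_tokens(ids))
```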
The part about tokenizing with MeCab is clear, but what about the subword tokenization? And what if there are words in the fine-tuning data that do not appear in the pretraining data? Some guide on using your pretrained model would be great.