This repository has been archived by the owner on Apr 23, 2024. It is now read-only.
We don't support this feature right now. It may be added later.
As a workaround, append the special token at the end of your training text many times (for example, 1000 times). That way the model will definitely add it to the vocabulary.
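The workaround above can be sketched as follows. This is a minimal sketch, not part of yttm itself: the helper name and file paths are placeholders, and the training call is shown only as a comment since the exact setup depends on your corpus.

```python
def append_special_token(corpus_path: str, token: str, repeats: int = 1000) -> None:
    """Append `token` to the end of the training text `repeats` times,
    each on its own line, so BPE is very likely to keep it whole."""
    with open(corpus_path, "a", encoding="utf-8") as f:
        f.write(("\n" + token) * repeats)

# Hypothetical usage with yttm (paths are placeholders):
# append_special_token("train.txt", "[MASK]", repeats=1000)
# import youtokentome as yttm
# yttm.BPE.train(data="train.txt", vocab_size=5000, model="model.bin")
```

After training, the token should appear in the vocabulary as a single unit rather than being split into subwords.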
We would also find this option very useful 👍 The workaround of repeating a special token 1000 times is fine if there are just a few tokens, but it is inconvenient when we want to provide a large set of words that should never be split.
I want to use this yttm model. However, I want to add a [MASK] token to the vocabulary.
In this case, how can I predefine special tokens?