The added special tokens seem to convert to ids incorrectly with my Chinese dataset #36
Comments
Yes, our ALBERT experiments are based on …
Hi, I just want to check back and see whether you have resolved the issue.
Hi, I also had this problem after trying to train a model with BERT base. Could you please answer the following questions: …
@hero-png Thanks for your questions!
@a3616001 Thank you very much for your quick reply!
Have you tried the add_new_tokens argument? I used bert-base-chinese to train on my Chinese dataset, and the number of entity types exceeds the total number of [unused] tokens. After applying the add_marker_tokens function, the vocab size and the tokenizer length both grew from 21128 to 21300. Then I loaded the original BERT model and applied resize_token_embeddings(). However, I found that the special tokens, such as <subj_start=地点> (location), <obj_start=人物> (person), etc., are all represented as 100, i.e., [UNK], in the input_ids sequence after applying tokenizer.convert_tokens_to_ids(). Since the different marker tokens should each have a distinct id according to the paper, could you please help me solve this problem?
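For reference, here is a minimal sketch (not the repository's exact code) of the expected flow with the HuggingFace transformers API, using illustrative marker strings. If the markers are added to the same tokenizer instance that later performs the conversion, each one should map to a fresh id rather than to [UNK]:

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
model = BertModel.from_pretrained('bert-base-chinese')

# Illustrative marker tokens; the real ones come from the dataset's entity types.
markers = ['<subj_start=地点>', '<obj_start=人物>']

# add_tokens() appends genuinely new tokens to the vocabulary and
# returns how many were actually added (here, 2).
num_added = tokenizer.add_tokens(markers)

# Grow the model's embedding matrix to match the enlarged vocabulary.
model.resize_token_embeddings(len(tokenizer))

# Each marker should now have its own id (>= 21128 for bert-base-chinese).
# If this still prints [100, 100], i.e. [UNK], the conversion is being done
# by a tokenizer instance that never had the tokens added (e.g. one
# reloaded from the original pretrained files).
print(tokenizer.convert_tokens_to_ids(markers))
```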