Hi 👋,

Thank you for the great work!

I'm trying to replicate the BERT baseline for the downstream tasks. Is it possible to load a pre-trained BERT model instead of the pre-trained ERNIE model in the fine-tuning code? If not, could you provide some pointers to the code you used for the baseline?

I pointed `--ernie-model` to the pre-trained BERT model, but I got this error:
```
Traceback (most recent call last):
  File "code/run_typing.py", line 573, in <module>
    main()
  File "code/run_typing.py", line 511, in main
    train_examples, label_list, args.max_seq_length, tokenizer_label, tokenizer, args.threshold)
  File "code/run_typing.py", line 168, in convert_examples_to_features
    tokens_a, entities_a = tokenizer_label.tokenize(ex_text_a, [h])
AttributeError: 'NoneType' object has no attribute 'tokenize'
```
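For context, the traceback suggests `tokenizer_label` is `None`, i.e. its construction earlier in `main` failed without raising. Below is a minimal sketch of a guard that would surface the failure at load time; the helper name is hypothetical, and I'm assuming the repo's tokenizers follow the pytorch_pretrained_bert convention of logging an error and returning `None` when the vocab file can't be found:

```python
import os


def load_tokenizer_or_fail(tokenizer_cls, model_path, do_lower_case=True):
    """Hypothetical helper: wrap from_pretrained and fail fast on None.

    pytorch_pretrained_bert-style tokenizers log an error and return None
    when vocab.txt cannot be resolved under model_path, so the problem
    otherwise only shows up later as an AttributeError on NoneType.
    """
    tokenizer = tokenizer_cls.from_pretrained(model_path, do_lower_case=do_lower_case)
    if tokenizer is None:
        contents = os.listdir(model_path) if os.path.isdir(model_path) else "not a directory"
        raise ValueError(
            "from_pretrained returned None for %r (contents: %s); "
            "is vocab.txt present there?" % (model_path, contents))
    return tokenizer
```

If a guard like this fires with the BERT directory, then the path or the file layout is presumably what differs from the ERNIE release.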
Please let me know if I'm missing something here.

Thank you!
June