
Can you provide some model data? #65

Open

Godlikemandyy opened this issue Jan 26, 2021 · 0 comments

Comments

@Godlikemandyy

Hello,
I tried running etl_span_transformers and got the following errors:
2021-01-26 14:49:24,295 - transformers.tokenization_utils - INFO - Model name 'transformer_model_path' not found in model shortcut name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased). Assuming 'transformer_model_path' is a path or url to a directory containing tokenizer files.
2021-01-26 14:49:24,295 - transformers.tokenization_utils - INFO - Didn't find file transformer_model_path. We won't load it.
2021-01-26 14:49:24,296 - transformers.tokenization_utils - INFO - Didn't find file transformer_model_path\added_tokens.json. We won't load it.
2021-01-26 14:49:24,296 - transformers.tokenization_utils - INFO - Didn't find file transformer_model_path\special_tokens_map.json. We won't load it.
2021-01-26 14:49:24,296 - transformers.tokenization_utils - INFO - Didn't find file transformer_model_path\tokenizer_config.json. We won't load it.
Traceback (most recent call last):
  File "run/relation_extraction/etl_span_transformers/main.py", line 148, in <module>
    main()
  File "run/relation_extraction/etl_span_transformers/main.py", line 129, in main
    tokenizer = BertTokenizer.from_pretrained(args.bert_model, do_lower_case=True)
  File "D:\Anaconda3\envs\deepie\lib\site-packages\transformers\tokenization_utils.py", line 283, in from_pretrained
    return cls._from_pretrained(*inputs, **kwargs)
  File "D:\Anaconda3\envs\deepie\lib\site-packages\transformers\tokenization_utils.py", line 347, in _from_pretrained
    list(cls.vocab_files_names.values())))
OSError: Model name 'transformer_model_path' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased). We assumed 'transformer_model_path' was a path or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.
Could you provide some model data? Thanks a lot.
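For what it's worth, the error itself only means that `args.bert_model` still holds the literal placeholder string `transformer_model_path` rather than a real model directory or shortcut name. Below is a minimal sketch of the two usual fixes, assuming the pretrained BERT files (`vocab.txt` at minimum for the tokenizer) have been downloaded locally; the directory name `./bert-base-chinese` is an illustrative placeholder, not a path from this repo:

```python
from transformers import BertTokenizer

# Option 1: point at a local directory that already contains the pretrained
# files (vocab.txt for the tokenizer). The directory below is a placeholder.
tokenizer = BertTokenizer.from_pretrained("./bert-base-chinese", do_lower_case=True)

# Option 2: pass a shortcut name from the list in the log above and let
# transformers download and cache the files automatically.
tokenizer = BertTokenizer.from_pretrained("bert-base-chinese", do_lower_case=True)
```

Either way, the value reaching `args.bert_model` in main.py needs to be one of these, not the placeholder.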
