I set up the environment following the instructions and downloaded all the required files, then ran the same examples as in the README. The only change I made was switching use_cuda = True to False. Whatever I try, I keep coming back to this KeyError:

Code:

```python
# Sentence-level sentiment classification (Chinese)
my_senta.init_model(model_class="ernie_1.0_skep_large_ch", task="sentiment_classify", use_cuda=use_cuda)
texts = ["中山大学是岭南第一学府"]
result = my_senta.predict(texts)
print(result)

# Aspect-level sentiment classification (Chinese)
my_senta.init_model(model_class="ernie_1.0_skep_large_ch", task="aspect_sentiment_classify", use_cuda=use_cuda)
texts = ["百度是一家高科技公司"]
aspects = ["百度"]
result = my_senta.predict(texts, aspects)
print(result)

# Opinion extraction (Chinese)
my_senta.init_model(model_class="ernie_1.0_skep_large_ch", task="extraction", use_cuda=use_cuda)
texts = ["唐 家 三 少 , 本 名 张 威 。"]
result = my_senta.predict(texts)
print(result)
```
```
KeyError                                  Traceback (most recent call last)
<ipython-input> in <module>
     10 # 预测中文句子级情感分类任务
     11
---> 12 my_senta.init_model(model_class="ernie_1.0_skep_large_ch", task="sentiment_classify", use_cuda=use_cuda)
     13 texts = ["中山大学是岭南第一学府"]
     14 result = my_senta.predict(texts)

/opt/anaconda3/envs/hsbc/lib/python3.7/site-packages/senta/train.py in init_model(self, model_class, task, use_cuda)
    222             tokenizer_params["bpe_json_file"] = _get_abs_path(bpe_j_file)
    223
--> 224         tokenizer_class = RegisterSet.tokenizer.__getitem__(tokenizer_name)
    225         self.tokenizer = tokenizer_class(vocab_file=tokenizer_vocab_path,
    226                                          split_char=" ",

/opt/anaconda3/envs/hsbc/lib/python3.7/site-packages/senta/common/register.py in __getitem__(self, key)
     45         except Exception as e:
     46             logging.error("module {key} not found: {e}")
---> 47             raise e
     48
     49     def __contains__(self, key):

/opt/anaconda3/envs/hsbc/lib/python3.7/site-packages/senta/common/register.py in __getitem__(self, key)
     42     def __getitem__(self, key):
     43         try:
---> 44             return self._dict[key]
     45         except Exception as e:
     46             logging.error("module {key} not found: {e}")

KeyError: 'FullTokenizer'
```
Could someone tell me what is going wrong here?
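For context, the traceback shows the lookup failing inside a class registry. A minimal sketch of that decorator-registration pattern (hypothetical names, not Senta's actual code) illustrates why this kind of `KeyError` typically means the registering module was never successfully imported, rather than a problem in the calling code:

```python
# Sketch of a decorator-based registry: a class is added to the registry
# only when the module that defines it is imported without error.
class Register:
    def __init__(self):
        self._dict = {}

    def register(self, cls):
        # Used as a decorator: stores the class under its own name.
        self._dict[cls.__name__] = cls
        return cls

    def __getitem__(self, key):
        # Raises KeyError if the defining module never ran.
        return self._dict[key]

tokenizer = Register()

@tokenizer.register
class FullTokenizer:
    pass

# The lookup succeeds only because the class definition above executed.
assert tokenizer["FullTokenizer"] is FullTokenizer

# If the defining module had failed to import (for example, a missing
# optional dependency whose ImportError was swallowed at import time),
# the same lookup would raise KeyError: 'FullTokenizer'.
```

Under that reading, a worthwhile check is importing the tokenizer module directly in a fresh interpreter and seeing whether an ImportError surfaces.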
I figured it out, sorry for the noise.
What should be done about this problem?
How did you solve it?