
Hi, how can I turn this into a QA system like the one in your Open Domain demo? #39

Closed
sheiscute opened this issue Jul 28, 2021 · 13 comments


@sheiscute

As the title says: after training finishes, how do I build the model into a question-answering system?

@wywzxxz

wywzxxz commented Aug 28, 2021

+1. I've been banging my head against this trying to reproduce it; my output just doesn't match the DEMO…

@wywzxxz

wywzxxz commented Aug 28, 2021

from transformers import QuestionAnsweringPipeline
qap = QuestionAnsweringPipeline(model, tokenizer)

That fixed it for me.

@yan1990y

yan1990y commented Mar 8, 2022

(quoting @wywzxxz's snippet above)

How do I pass arguments to it after that?
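The pipeline also accepts plain keyword arguments, so one way to parameterize calls is a small wrapper. A minimal sketch, assuming `qap` is the `QuestionAnsweringPipeline` built above; `top_k` and `max_answer_len` are options of the transformers QA pipeline (older releases spell the first one `topk`), and the helper name `ask` is my own:

```python
# Hedged sketch: forward keyword arguments to a transformers
# QuestionAnsweringPipeline instead of passing a dict.
# `top_k` / `max_answer_len` are pipeline options (older
# transformers versions use `topk`); `ask` is a made-up helper name.
def ask(qap, question, context, top_k=1, max_answer_len=30):
    """Call the QA pipeline with keyword arguments."""
    return qap(question=question, context=context,
               top_k=top_k, max_answer_len=max_answer_len)
```

With `top_k=3`, for example, the pipeline returns the three highest-scoring answer spans instead of just the best one.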

@yan1990y

yan1990y commented Mar 8, 2022

(quoting @wywzxxz's snippet above)

I tried it: it runs without errors, but the result is badly off.

question = "我今天吃了什么"
text = "今天天气很好,我出门散步,吃了一顿肯德基"
ans = qap(
    {'question': question,
     'context': text}
)
print(ans)

The answer is "{'score': 0.940702007202276, 'start': 0, 'end': 19, 'answer': '今天天气很好,我出门散步,吃了一顿肯德基'}".
Either way, it's far worse than the Open Domain demo. Do I need to train it further?

@wywzxxz

wywzxxz commented Mar 8, 2022

(quoting @yan1990y's test above)

from transformers import AutoTokenizer, AutoModelForQuestionAnswering
from transformers import QuestionAnsweringPipeline

model_name = "chinese_pretrain_mrc_roberta_wwm_ext_large"
tokenizer = AutoTokenizer.from_pretrained(f"luhua/{model_name}")
model = AutoModelForQuestionAnswering.from_pretrained(f"luhua/{model_name}")

qap = QuestionAnsweringPipeline(model, tokenizer)

question = "我今天吃了什么"
text = "今天天气很好,我出门散步,吃了一顿肯德基"
ans = qap(
    {'question': question,
     'context': text}
)
print(ans)

Output:

{'score': 0.9409229755401611, 'start': 17, 'end': 20, 'answer': '肯德基'}

Open Domain: 肯德基 0.941

@yan1990y

yan1990y commented Mar 8, 2022

(quoting: chinese_pretrain_mrc_roberta_wwm_ext_large)

I downloaded that folder directly with git clone; it contains pytorch_model.bin and the other files. Can I just point the code at the local directory? With the code you posted, I get:

Can't load 'luhua/chinese_pretrain_mrc_roberta_wwm_ext_large'. Make sure that:

  • 'luhua/chinese_pretrain_mrc_roberta_wwm_ext_large' is a correct model identifier listed on 'https://huggingface.co/models'

  • or 'luhua/chinese_pretrain_mrc_roberta_wwm_ext_large' is the correct path to a directory containing a 'config.json' file

@yan1990y

yan1990y commented Mar 8, 2022

With a proxy turned on, the download worked, and the model is indeed effective. Apparently the git-cloned local copy is no good; it has to be done this way.
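A common cause of a "broken" git-cloned checkpoint is cloning without git-lfs: the large pytorch_model.bin then comes down as a tiny text pointer file, which `from_pretrained` cannot load even though the directory looks complete. A quick sanity check, as a sketch (the size threshold and the function name are my own heuristics, not part of any transformers API, and it assumes the classic `pytorch_model.bin` weight name rather than safetensors):

```python
import os

def looks_usable(model_dir, min_weight_bytes=10_000_000):
    """Heuristic check that a git-cloned Hugging Face model dir is complete.

    A repo cloned without git-lfs leaves pytorch_model.bin as a ~130-byte
    pointer file, which from_pretrained cannot load. The 10 MB threshold
    is a rough heuristic, not an official rule.
    """
    config_ok = os.path.isfile(os.path.join(model_dir, "config.json"))
    weights = os.path.join(model_dir, "pytorch_model.bin")
    weights_ok = (os.path.isfile(weights)
                  and os.path.getsize(weights) >= min_weight_bytes)
    return config_ok and weights_ok
```

If the check passes, `from_pretrained("path/to/dir")` should accept the local path directly. After a successful hub download, you can also call `model.save_pretrained(...)` and `tokenizer.save_pretrained(...)` to keep a complete offline copy for later use.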

@Djokovic0311

(quoting @yan1990y above)

What exactly did you download over the proxy? Thanks!

@yan1990y

(quoting @Djokovic0311 above)

The model, of course.

@xiuzhilu

(quoting the earlier thread)

Did you ever resolve the mismatch between the results from the locally downloaded model and the Hugging Face API? Did you fine-tune the model, or do something else? @yan1990y

@Silencioo

(quoting the earlier thread)

Has the cause been found?

@yan1990y

(quoting @xiuzhilu's question above)

See the replies further down in the thread.

@yan1990y

(quoting @Silencioo's question above)

See the replies further down.

7 participants