Hugging Face Hub example is factually incorrect #2802
Comments
This was also the result I got from running the example: "The Seattle Seahawks won the Super Bowl in 2010. Justin Beiber was born in 2010. The final answer: Seattle Seahawks." This seems to be due to Google's Flan model, because when the example is used with other models like BLOOM or GPT, the answer is accurate.
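For anyone who wants to see the failure shape without an API token, the example's chain structure can be sketched with a stub in place of the real model call (the stub names here are hypothetical; the actual docs example uses LangChain's `HuggingFaceHub` with `repo_id="google/flan-t5-xl"`):

```python
# Prompt template matching the shape used in the Hugging Face Hub example.
TEMPLATE = "Question: {question}\n\nAnswer: Let's think step by step."

def stub_flan(prompt: str) -> str:
    # Stub that reproduces the incorrect completion reported in this issue,
    # standing in for a real HuggingFaceHub(repo_id="google/flan-t5-xl") call.
    return ("The Seattle Seahawks won the Super Bowl in 2010. "
            "Justin Beiber was born in 2010. The final answer: Seattle Seahawks.")

def run_chain(llm, question: str) -> str:
    # Minimal analogue of LLMChain.run: format the prompt, call the model.
    return llm(TEMPLATE.format(question=question))

answer = run_chain(stub_flan, "What NFL team won the Super Bowl in the year Justin Beiber was born?")
print(answer)
```

Swapping `stub_flan` for a callable backed by a different `repo_id` (e.g. a BLOOM checkpoint) is the one-line change the comments above describe.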
#2810
### #2802 It appears that Google's Flan model may not perform as well as other models; I used a simple example and got a factually correct answer.
I have used MPT-7 and it tells me the same
Yeah the Huggingface model isn't as reliable!
Are there any other open source models? I have tried all of the ones on GPT4All and many commercial ones on Hugging Face; none are as good as text-davinci or GPT-3.
You could try the Falcon ones on Hugging Face. It really depends on your use case. While some models are fine for certain tasks, in general we haven't seen a serious open source competitor yet across all possible use cases. OpenAI's raison d'être is to make models that are better than other providers', and it has a head start. Most of the open source models right now are either more domain specific (chit-chat, summarization, etc.) or pretrained / non-finetuned.
I was having fun trying them out and was going to fine-tune some, but now that I'm using langchain it is not really a problem. I have played around with many models, and I find that the best one for me is MPT-7, according to my testing (my testing means simply checking if the model can generate a tweet).
Hi, @JasonWeill. I'm Dosu, and I'm helping the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

Based on my understanding of the issue, you reported an error on the Hugging Face Hub example page regarding the Super Bowl winner in the year Justin Bieber was born. The page incorrectly states that the Seattle Seahawks won in 2010, when in fact it was the Dallas Cowboys in 1994. Several users, including cnhhoang850, azamiftikhar1000, Majboor, and vowelparrot, have confirmed the issue and discussed alternative models. Majboor mentioned that MPT-7 was the best model for their testing.

Before we close this issue, we wanted to check if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on this issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days. Thank you for your contribution, and we appreciate your understanding as we work to manage our backlog effectively. Let us know if you have any further questions or concerns.
On the Hugging Face Hub example page, the question is, "What NFL team won the Super Bowl in the year Justin Beiber [sic] was born?" The answer is, "The Seattle Seahawks won the Super Bowl in 2010. Justin Beiber was born in 2010. The final answer: Seattle Seahawks."
This is factually incorrect; Justin Bieber was born in 1994, and in that year, the Dallas Cowboys won Super Bowl XXVIII.
In addition, the New Orleans Saints won Super Bowl XLIV in 2010, defeating the Indianapolis Colts; the Seahawks did not win their first Super Bowl until 2014 (Super Bowl XLVIII).
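For reference, the fact pattern the example's answer should satisfy can be encoded as a small sanity check (the helper and constant names are hypothetical, not part of LangChain or its docs):

```python
BIEBER_BIRTH_YEAR = 1994  # Justin Bieber was born March 1, 1994

# Super Bowl winners keyed by the calendar year the game was played
# (only the years relevant to this issue).
SUPER_BOWL_WINNERS = {
    1994: "Dallas Cowboys",      # Super Bowl XXVIII
    2010: "New Orleans Saints",  # Super Bowl XLIV
    2014: "Seattle Seahawks",    # Super Bowl XLVIII
}

def expected_answer(birth_year: int) -> str:
    """Return the team that won the Super Bowl played in the given year."""
    return SUPER_BOWL_WINNERS[birth_year]

print(expected_answer(BIEBER_BIRTH_YEAR))  # Dallas Cowboys
```

A check like this makes clear why the example's output is wrong twice over: Bieber was not born in 2010, and the Seahawks did not win the 2010 game either.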