Shouldn't CoT (Chain-of-Thought Prompting Elicits Reasoning in Large Language Models) be classified as a Multi-Stage method under Prompt Engineering? I see it has been placed under Single-Stage.
Hi, the distinction between single-stage and multi-stage is essentially one input-output pass versus multiple input-output passes. CoT is still completed by the model in a single input-output pass, so it belongs with the single-stage methods.
Sorry, I cited the wrong paper earlier. I meant Zero-shot-CoT (Large Language Models are Zero-Shot Reasoners). It obtains the final result through two stages of input-output, so shouldn't it be a Multi-Stage method? I see it has been placed under Single-Stage.
Thanks for the question! What we focused on at the time was its rationale-generation step. As for the answer-extraction step, experiments show it is actually unnecessary: it only serves to make the model produce the answer in a specific format (the examples in the paper also show this). So we simply grouped it under single-stage, since its two stages are not really representative compared with the papers in the multi-stage category.
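For reference, the two stages being discussed can be sketched as below. This is a minimal illustration of the Zero-shot-CoT pipeline from the paper, not the repository's code; `call_model` is a hypothetical stand-in for any LLM completion API.

```python
def call_model(prompt: str) -> str:
    # Hypothetical stub: a real implementation would query an LLM here.
    return "<model completion>"

def build_stage1_prompt(question: str) -> str:
    # Stage 1: reasoning extraction — the trigger phrase makes the model
    # generate a rationale in a single input-output pass.
    return f"Q: {question}\nA: Let's think step by step."

def build_stage2_prompt(question: str, rationale: str) -> str:
    # Stage 2: answer extraction — the rationale is fed back with a prompt
    # that forces the answer into a fixed format. As noted above, this stage
    # mainly constrains the output format rather than adding new reasoning.
    return f"{build_stage1_prompt(question)} {rationale}\nTherefore, the answer is"

def zero_shot_cot(question: str) -> str:
    rationale = call_model(build_stage1_prompt(question))
    return call_model(build_stage2_prompt(question, rationale))
```

Since the second call only reformats what the first call already produced, the pipeline's reasoning content comes from stage 1 alone, which is the basis for the single-stage classification discussed above.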
Got it, thanks. Understood.