As the title says: I'm running Baichuan2 13B with llama.cpp, and I noticed that in Baichuan's official PyTorch demo, the separators for multi-turn dialogue come directly from the token IDs configured in generation_config.json, rather than being produced by the tokenizer from plain-text concatenation. Is there a way to correctly assemble a multi-turn dialogue prompt without extra work (such as modifying llama.cpp's code or adding my own vocabulary mappings)?
Solved. They are <reserved_106> and <reserved_107>, respectively.
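Based on the answer above, a minimal sketch of assembling the multi-turn prompt as plain text, assuming llama.cpp's tokenizer for this model maps the literal strings `<reserved_106>` (user turn) and `<reserved_107>` (assistant turn) to the corresponding special tokens from generation_config.json; the helper name `build_prompt` is hypothetical:

```python
def build_prompt(history):
    """Concatenate a Baichuan2-style multi-turn prompt.

    history: list of (user_msg, assistant_msg) pairs; assistant_msg is
    None for the final turn, leaving the prompt open for generation.
    Assumes "<reserved_106>"/"<reserved_107>" tokenize to the user/
    assistant special tokens (an assumption about the llama.cpp vocab).
    """
    parts = []
    for user_msg, assistant_msg in history:
        parts.append("<reserved_106>" + user_msg)
        parts.append("<reserved_107>" + (assistant_msg or ""))
    return "".join(parts)

# Example: two user turns, one completed assistant reply.
prompt = build_prompt([("你好", "你好!有什么可以帮你的吗?"),
                       ("介绍一下你自己", None)])
```

Whether this works without code changes depends on the GGUF conversion having kept the reserved tokens as special tokens that the plain-text tokenizer will match.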
@greyamber Hi, could you give an example of a multi-turn dialogue prompt in string form? Thanks!