Cannot specify role: system in LLM::Anthropic #603
Comments
@kokuyouwind Thank you for this proposal! I'm curious: do you have a need for this in your applications? Would this make passing the system role easier for you?
Thanks for the reply.
I am building a development tool that will allow users to freely configure their preferred LLM. To avoid this error, one of the following actions is needed.
If this proposal is accepted, the tool would not need to care about the LLM type and could be implemented simply.
@kokuyouwind Thank you for your PR and for using this library in your gem 😄 I'd like to think through this properly after Langchain::Assistant Anthropic support is added in #543. We would need to add an AnthropicMessage class like this one: https://github.com/patterns-ai-core/langchainrb/pull/513/files#diff-86baf19d3db04ca4b773792c27230e17bb4ba4f9373d17688b8a2f67de6f9c28
@andreibondarev Personally, I would be happy if this could also be used in cases where Assistant is not involved (cases where LLM::XXXClient#chat is called directly, or where a chain of RAGs, QA bots, etc. is set up).
You may close this issue and pull request #604, as I have resolved all of my original problems by moving everything into the user message. If you would consider it, could we open another issue for "Separating implementations not related to tools from Assistant"?
@kokuyouwind I've been thinking that the interface could look like this:

```ruby
message_1 = Langchain::Messages::AnthropicMessage.new(role: "user", content: "hi!")
message_2 = Langchain::Messages::AnthropicMessage.new(role: "assistant", content: "Hey! How can I help?")
message_3 = Langchain::Messages::AnthropicMessage.new(role: "user", content: "Help me debug my computer")

Langchain::LLM::Anthropic.new(...).chat(messages: [message_1, message_2, message_3])
```
@andreibondarev How about this instead?

```ruby
message_1 = Langchain::Messages::UserMessage.new("hi!")
message_2 = Langchain::Messages::AssistantMessage.new("Hey! How can I help?")
message_3 = Langchain::Messages::UserMessage.new("Help me debug my computer")

Langchain::LLM::Anthropic.new(...).chat(messages: [message_1, message_2, message_3])
```

The class names above are aligned with the role notation, but they could instead be aligned with the message class names in Python's LangChain.
I had not noticed that a discussion had been started in #629.
Description
In OpenAI and Ollama, it is possible to specify `system`, `user`, and `assistant` as roles. In Anthropic, however, only `user` and `assistant` can be specified, and an error occurs if `system` is specified.

I know that the Anthropic API specification does not allow `system` as a role, and instead requires the use of a top-level `system` parameter.
However, as an LLM framework, I believe it is preferable to absorb the differences between services and offer a common interface.
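To illustrate the difference, here is a sketch of the two request shapes as plain hashes (simplified and without network calls; the model names are just examples):

```ruby
# OpenAI/Ollama style: the system prompt travels inside the messages array.
openai_payload = {
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "hi!" }
  ]
}

# Anthropic style: "system" is a top-level parameter, and messages may
# only contain "user" and "assistant" roles.
anthropic_payload = {
  model: "claude-3-opus-20240229",
  system: "You are a helpful assistant.",
  messages: [
    { role: "user", content: "hi!" }
  ]
}
```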
Reference case: Python library

In the Python library, `ChatAnthropic` accepts `ChatPromptTemplate.from_messages([("system", system), ("human", human)])`.

https://python.langchain.com/docs/integrations/chat/anthropic/
Proposal

As a preprocessing step in `LLM::Anthropic#chat`, how about extracting the `role: system` messages from `messages`, in addition to the top-level `system` argument, and passing them all, joined by line breaks, as the `system` parameter of the API?

This would not break the existing behavior and would also cover cases with multiple `role: system` messages.
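A minimal sketch of this preprocessing step (the helper name and return shape are hypothetical, not langchainrb's actual API):

```ruby
# Hypothetical helper sketching the proposal: pull role: system messages
# out of the messages array and merge them into the top-level system
# parameter. Names are illustrative only.
def extract_system_prompt(messages:, system: nil)
  system_messages, chat_messages = messages.partition { |m| m[:role] == "system" }

  # Top-level system argument first, then any role: system messages,
  # all joined with line breaks.
  combined = [system, *system_messages.map { |m| m[:content] }].compact.join("\n")

  [combined.empty? ? nil : combined, chat_messages]
end

system, messages = extract_system_prompt(
  system: "You are a helpful assistant.",
  messages: [
    { role: "system", content: "Answer in English." },
    { role: "user", content: "hi!" }
  ]
)
# system   => "You are a helpful assistant.\nAnswer in English."
# messages => [{ role: "user", content: "hi!" }]
```

Since only `user` and `assistant` messages remain in `messages`, the result can be passed to the Anthropic API unchanged, and callers that never use `role: system` see no behavior change.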