Pass custom System Message to OpenAI Functions Agent #6334
@homanp it seems to work for me on the latest version (0.0.207):

```python
system_message = SystemMessage(
    content="You are an AI that always writes text backwards e.g. 'hello' becomes 'olleh'."
)
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    system_message=system_message,
    agent_kwargs={"system_message": system_message},
)
```

However, like the #6297 PR, I had to add
Ah, that makes sense!
For me, it's also working just by providing
Output:
Does this also work for providing human_message?
I'm not quite sure what your use case would be for this, but I'll test it for you when I get back to my PC.
I have tried adding system_message=system_message together with system_message in agent_kwargs in initialize_agent(), but it is not working.
Sorry, do you mean I need to pass agent=AgentType.OPENAI_FUNCTIONS if I am going to pass a system message to initialize_agent()?
Yes, this issue is about OpenAI function calling in agents.
It doesn't work for me. I'm reading a document.
I tried adding the system_message both as a parameter and in agent_kwargs, and it tells me that it expected a string.
I managed to make it behave as prescribed by placing a dummy string, but I'm pretty sure this is not a solution.
Any clue? Using langchain 0.0.208.
Try updating to the newest version.
I tried updating and I now have version 0.0.234. Still facing the same issue.
For me, it worked with
Yes, thank you, it works, as I mentioned in my first question :). My doubt was more that there seemed to be two arguments for the system message, and which one works in which version seems kind of inconsistent. Let me summarize what I found:
Version 0.0.207: see #6334 (comment)
Version 0.0.208: however:
Version 0.0.234: this works:
So the conclusion is that the agent_kwargs route is the one that works. (Excuse the length, and sorry for talking about a different agent type, but the issue seemed identical!)
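One way to paper over this inconsistency is a small compatibility shim that builds the right keyword arguments for initialize_agent() from the installed version string. This is a minimal sketch; the 0.0.208 cutoff and the exact per-version semantics are assumptions based on this thread, not verified library behaviour:

```python
def build_agent_kwargs(system_message, langchain_version: str) -> dict:
    """Build keyword arguments for initialize_agent() by version.

    Hypothetical shim: the 0.0.208 cutoff is a guess based on this thread,
    not verified library behaviour. Before 0.0.208 the top-level
    system_message argument appeared to be honoured; afterwards it had to
    go through agent_kwargs.
    """
    parts = tuple(int(p) for p in langchain_version.split("."))
    if parts < (0, 0, 208):
        return {"system_message": system_message}
    return {"agent_kwargs": {"system_message": system_message}}
```

It would then be spread into the call, e.g. `initialize_agent(tools, llm, agent=AgentType.OPENAI_FUNCTIONS, **build_agent_kwargs(system_message, "0.0.234"))`.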
@sousanunes That is exactly what I've experienced. I've also been using agent_kwargs instead to pass in the system message.
Hey everyone, does anyone here know how to limit an agent's response length? My agent generates really long responses, especially when I increase the k value of my retrievers.
In general, there are two options that come to mind.
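Judging by the follow-up, the two options were max_tokens on the LLM and an instruction in the system prompt; since both can be unreliable, a deterministic post-processing fallback is to trim the final answer at a sentence boundary. A minimal sketch with a hypothetical helper (not a LangChain API):

```python
def truncate_response(text: str, max_chars: int = 500) -> str:
    """Hard-truncate an agent's final answer at a sentence boundary.

    Deterministic fallback for when max_tokens is not reliable: cut at the
    last sentence end before the limit, or append "..." if none is found.
    """
    if len(text) <= max_chars:
        return text
    cut = text[:max_chars]
    end = max(cut.rfind(". "), cut.rfind("! "), cut.rfind("? "))
    if end == -1:
        return cut.rstrip() + "..."
    return cut[:end + 1]
```

This runs on the agent's output string after the chain finishes, so it works regardless of how many LLM calls the agent made internally.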
@0ptim Thanks for the reply, man. I tried setting max_tokens on my agent's LLM; it is not reliable, though, just as you said. I also tried it with system prompts, but I think it gets lost in the multiple LLM chains that it goes through (langchain.debug = False).
@hinthornw @baskaryan This issue can be closed.
Exactly what I needed 🤩 Thanks a lot 👍
Has anyone managed to make system_message work for type CONVERSATIONAL_REACT_DESCRIPTION? 😄 I'm using version 0.0.257 of langchain.
I tried to test a simple case, but I don't know why I get an error on AgentType: AttributeError: OPENAI_FUNCTIONS. Any idea? See below:

```python
import os
import pandas as pd

os.environ["OPENAI_API_KEY"] = ""
```
For some reason SystemMessage does not work for me (the agent ignores it). Here is my code:
I tried to pass system_message directly, but the agent still ignores the SystemMessage.
Also, I tried to use
Langchain version is 0.0.281.
Adding a system_message like this works on version 0.0.271:
However, another problem is how to change this system_message in intermediate steps. For example, we have different tools and our agent is going to use one of them; we don't have to give the same system message for every step. Also, how can we check the actual chain logic? Even though I set verbose=True, it only shows some of the things, like:
If you just want to edit the DataFrame instruction, you can add a "prefix":
dataframe_instruction = "Do xyz to the dataframe"
Example:
I have no intention of using a dataframe; I just want to use a different system prompt for each POST request to the OpenAI API. The first request can be made with one system_prompt, and then we should dynamically change the system_message. For example, if the agent picks a weather tool, I would then like to give only weather-related system_messages.
Example 1, Step 1:
Step 2:
Example 2, Step 1:
Step 2:
As a summary: although the first prompts are the same, the second ones are different and customized by me for my specific needs. The difference from adding these prompts to the tools is token reduction: the bot doesn't need to know what it's going to do with the data until it gets the data. Hopefully that clarifies the question a little bit more. Regards
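The per-tool selection described above can be sketched as a plain-Python lookup that runs between agent steps; the tool names and prompt wording here are hypothetical, and wiring it into a specific agent framework is left open:

```python
# Hypothetical per-tool system messages; names and wording are
# illustrative, not from LangChain.
TOOL_SYSTEM_MESSAGES = {
    "weather": "You are a weather assistant. Summarize only the weather data you received.",
    "search": "You are a research assistant. Cite the sources you used.",
}
DEFAULT_SYSTEM_MESSAGE = "You are a helpful assistant."


def system_message_for(tool_name: str) -> str:
    """Pick the system message for the next step from the tool the agent chose."""
    return TOOL_SYSTEM_MESSAGES.get(tool_name, DEFAULT_SYSTEM_MESSAGE)
```

After the agent selects a tool, the next LLM call would be issued with `system_message_for(chosen_tool)` instead of the original system message, so tool-specific instructions only spend tokens on the steps that need them.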
You should look at nesting agents for that use case.
I have tried all of the examples above to pass a system message to a CHAT_ZERO_SHOT_REACT_DESCRIPTION agent; however, nothing seems to work.
@hpohlmann have you found any solution for using CHAT_ZERO_SHOT_REACT_DESCRIPTION with system_message?
System Info
LangChain = 0.0.202
Python = 3.9.16
Reproduction
```python
from langchain import LLMMathChain, OpenAI, SerpAPIWrapper
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
from langchain.agents.agent_toolkits import create_python_agent
from langchain.tools.python.tool import PythonREPLTool
from langchain.callbacks.base import BaseCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage
from dotenv import load_dotenv

load_dotenv()

llm = ChatOpenAI(
    model="gpt-3.5-turbo-0613",
    temperature=0.0,
    max_tokens=25,
)  # type: ignore

python_agent = create_python_agent(
    llm=llm,
    tool=PythonREPLTool(),
    verbose=True,
    agent_type=AgentType.OPENAI_FUNCTIONS,
    agent_executor_kwargs={"handle_parsing_errors": True},
)  # type: ignore

search = SerpAPIWrapper()  # type: ignore
llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)

tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events or searching the web for additional information. You should ask targeted questions",
    ),
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="useful for when you need to answer questions about math. It's an ordinary calculator",
    ),
    Tool(
        name="PythonREPL",
        func=python_agent.run,
        description="useful for when you need to run python code in a REPL to answer questions, for example for more complex calculations or other code executions necessary to be able to answer correctly. Input should be clear python code, nothing else. You should always use a final print() statement for the final result to be able to read the outputs.",
    ),
]

system_message = SystemMessage(
    content="""
    You are a helpful AI assistant. Always respond to the user's input in german.
    """
)

mrkl = initialize_agent(
    tools,
    llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True,
    agent_kwargs={"system_message": system_message},
)  # type: ignore

# run the agent
mrkl.run("tell me a joke")
```
Expected behavior
The system message should be passed to the agent/LLM to make it answer in German, which doesn't happen.
I was able to fix this by passing the system message explicitly to the cls.create_prompt() function in the OpenAI functions agent class.
In langchain\agents\openai_functions_agent\base.py I modified these lines (line 244):
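The shape of that fix can be shown with a self-contained mimic: a class whose constructor forwards an optional system_message to create_prompt() instead of always using the hard-coded default. FakeAgent, from_kwargs, and DEFAULT_SYSTEM are illustrative stand-ins, not LangChain code:

```python
# Self-contained mimic of the described fix; names are stand-ins, NOT the
# real LangChain classes.
DEFAULT_SYSTEM = "You are a helpful AI assistant."


class FakeAgent:
    @classmethod
    def create_prompt(cls, system_message=DEFAULT_SYSTEM):
        # Stands in for OpenAIFunctionsAgent.create_prompt(): returns the
        # messages the prompt starts with.
        return [("system", system_message)]

    @classmethod
    def from_kwargs(cls, **kwargs):
        # The one-line change: pop system_message from kwargs and pass it
        # on, rather than letting create_prompt() fall back to the default.
        return cls.create_prompt(
            system_message=kwargs.pop("system_message", DEFAULT_SYSTEM)
        )
```

With this forwarding in place, a call like `FakeAgent.from_kwargs(system_message="Antworte auf Deutsch.")` seeds the prompt with the custom message instead of the default.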