
Pass custom System Message to OpenAI Functions Agent #6334

Closed
SimonB97 opened this issue Jun 17, 2023 · 32 comments

Comments

@SimonB97

System Info

LangChain = 0.0.202
Python = 3.9.16

Who can help?

No response

Information

  • The official example notebooks/scripts
  • My own modified scripts

Related Components

  • LLMs/Chat Models
  • Embedding Models
  • Prompts / Prompt Templates / Prompt Selectors
  • Output Parsers
  • Document Loaders
  • Vector Stores / Retrievers
  • Memory
  • Agents / Agent Executors
  • Tools / Toolkits
  • Chains
  • Callbacks/Tracing
  • Async

Reproduction

from langchain import LLMMathChain, OpenAI, SerpAPIWrapper
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
from langchain.agents.agent_toolkits import create_python_agent
from langchain.tools.python.tool import PythonREPLTool
from langchain.callbacks.base import BaseCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage
from dotenv import load_dotenv

load_dotenv()

llm = ChatOpenAI(
    model="gpt-3.5-turbo-0613",
    temperature=0.0,
    max_tokens=25,
)  # type: ignore

python_agent = create_python_agent(
    llm=llm,
    tool=PythonREPLTool(),
    verbose=True,
    agent_type=AgentType.OPENAI_FUNCTIONS,
    agent_executor_kwargs={"handle_parsing_errors": True},
)  # type: ignore

search = SerpAPIWrapper()  # type: ignore
llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)

tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events or searching the web for additional information. You should ask targeted questions",
    ),
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="useful for when you need to answer questions about math. It's an ordinary calculator",
    ),
    Tool(
        name="PythonREPL",
        func=python_agent.run,
        description="useful for when you need to run python code in a REPL to answer questions, for example for more complex calculations or other code executions necessary to be able to answer correctly. Input should be clear python code, nothing else. You should always use a final print() statement for the final result to be able to read the outputs.",
    ),
]

system_message = SystemMessage(
    content="""
You are a helpful AI assistant. Always respond to the user's input in german.
"""
)

mrkl = initialize_agent(
    tools,
    llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True,
    agent_kwargs={"system_message": system_message},
)  # type: ignore

# run the agent
mrkl.run("tell me a joke")

Expected behavior

The system message should be passed to the agent/LLM to make it answer in German, which doesn't happen.

I was able to fix this by passing the system message explicitly to the cls.create_prompt() function in the OpenAI Functions agent class.

In langchain/agents/openai_functions_agent/base.py I modified these lines:

line 244:

# check if system_message is in kwargs and pass it to create_prompt
if "system_message" in kwargs:
    sys_msg = kwargs.pop("system_message", None)
    prompt = cls.create_prompt(system_message=sys_msg)
else:
    prompt = cls.create_prompt()
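The pattern in that patch, popping an optional keyword argument and forwarding it only when present, can be sketched framework-free. All names below are illustrative stand-ins, not LangChain's actual API:

```python
def create_prompt(system_message="You are a helpful AI assistant."):
    """Stand-in for cls.create_prompt(); returns the prompt's system text."""
    return system_message

def from_llm_and_tools(**kwargs):
    """Stand-in for the agent constructor: forward system_message only if given."""
    if "system_message" in kwargs:
        sys_msg = kwargs.pop("system_message")
        return create_prompt(system_message=sys_msg)
    return create_prompt()

print(from_llm_and_tools())  # falls back to the default prompt
print(from_llm_and_tools(system_message="Antworte auf Deutsch."))  # custom prompt wins
```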
@0ptim

0ptim commented Jun 19, 2023

Should be fixed by #6297 I think from version 204.

@homanp
Contributor

homanp commented Jun 20, 2023

Should be fixed by #6297 I think from version 204.

Tried with 206 and it's still the same thing; the system message doesn't seem to actually take effect.

@chrisrickard

@homanp it seems to work for me on the latest version (207)

      system_message = SystemMessage(
          content="You are an AI that always writes text backwards e.g. 'hello' becomes 'olleh'."
      )

      agent = initialize_agent(
          tools,
          llm,
          agent=AgentType.OPENAI_FUNCTIONS,
          system_message=system_message,
          agent_kwargs={
             "system_message": system_message
            }
      )

However, like the #6297 PR, I had to add system_message=system_message along with adding system_message to agent_kwargs.

@homanp
Contributor

homanp commented Jun 21, 2023

> @homanp it seems to work for me on the latest version (207) […] However like the #6297 PR I had to add system_message=system_message along with adding system_message to agent_kwargs

Ah that makes sense!

@0ptim

0ptim commented Jun 21, 2023

For me, it's also working just by providing agent_kwargs:

    system_message = SystemMessage(content="You are Jelly.")

    agent_kwargs = {
        "extra_prompt_messages": [MessagesPlaceholder(variable_name="memory")],
        "system_message": system_message,
    }

    open_ai_agent = initialize_agent(
        tools,
        llm,
        agent=AgentType.OPENAI_FUNCTIONS,
        verbose=True,
        agent_kwargs=agent_kwargs,
        memory=memory,
    )

Output:

[chain/start] [1:chain:AgentExecutor] Entering Chain run with input:
{
  "input": "Who are you?",
  "memory": []
}
[llm/start] [1:chain:AgentExecutor > 2:llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "System: You are Jelly.\nHuman: Who are you?"
  ]
}

@Henok-Matheas

> For me, it's also working just by providing agent_kwargs: […]

Does this also work for providing human_message?

@0ptim

0ptim commented Jun 23, 2023

I'm not quite sure what your use case would be for this, but I'll test it for you when I get back to my PC.

@hlcheuk

hlcheuk commented Jun 25, 2023

I have tried adding system_message=system_message together with system_message to agent_kwargs in initialize_agent(). But it is not working.
Meanwhile, I am specifying agent = AgentType.CONVERSATIONAL_REACT_DESCRIPTION (better for chat) and using version 212

@homanp
Contributor

homanp commented Jun 25, 2023

I have tried adding system_message=system_message together with system_message to agent_kwargs in initialize_agent(). But it is not working.

Meanwhile, I am specifying agent = AgentType.CONVERSATIONAL_REACT_DESCRIPTION (better for chat) and using version 212
You need to use the OpenAI agent type.

@hlcheuk

hlcheuk commented Jun 25, 2023

Sorry, do you mean I need to use agent = AgentType.OPENAI_FUNCTIONS if I am going to pass a system message to initialize_agent()?

@homanp
Contributor

homanp commented Jun 25, 2023

Sorry do you mean I need to run agent = AgentType.OPENAI_FUNCTIONS if I am going to pass system message to initialize_agent()?

Yes, this issue is about OpenAI function calling in agents.

@NaelsonAccountDrive

It doesn't work for me. I'm reading a document.

@sousanunes

sousanunes commented Jul 14, 2023

I tried adding the system_message as a parameter and in agent_kwargs, and it tells me that it expected a string.

    agent_executor = initialize_agent(
        agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
        tools=tools,
        llm=conversation_llm,
        max_iteration=3,
        early_stopping_method="generate",
        memory=token_buffer_memory_chat_history_key,
        system_message=system_message,
        agent_kwargs={
            "system_message": system_message
        },
        verbose=False
    )

I get TypeError: expected str, got SystemMessage

I managed to make it behave as prescribed by placing a dummy string... but I'm pretty sure this is not a solution.

    system_message = get_agent_system_message(index_config.get("behavior"))
    agent_executor = initialize_agent(
        agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
        tools=tools,
        llm=conversation_llm,
        max_iteration=3,
        early_stopping_method="generate",
        memory=token_buffer_memory_chat_history_key,
        system_message=system_message,
        agent_kwargs={
            "system_message": "dummy"
        },
        verbose=False
    )

Any clue? Using langchain 0.0.208.

@0ptim

0ptim commented Jul 16, 2023

Try updating to the newest version.

@sousanunes

I tried updating and I now have version 0.0.234. Still facing the same issue.

@0ptim

0ptim commented Jul 17, 2023

For me, it worked with system_message = SystemMessage(content="You are Jelly."). Is this not an option for your use-case?

@sousanunes

sousanunes commented Jul 17, 2023

Yes, thank you, it works, as I mentioned in my first question :). I actually passed it agent_kwargs={"system_message": system_message.content}, which is a string.

My doubt was more that there seemed to be two arguments for the system_message, and which one works in what version seems kind of inconsistent.

Let me summarize what I found:

Version 0.0.207:
This works for agent OPENAI_FUNCTIONS:

          system_message=system_message,
          agent_kwargs={"system_message": system_message}

See #6334 (comment)

Version 0.0.208:
This works for agent CHAT_CONVERSATIONAL_REACT_DESCRIPTION.

          system_message=system_message,
          agent_kwargs={"system_message": "dummy"}

However:

  • not passing agent_kwargs does not result in expected behavior. Both system_message and agent_kwargs are needed.
  • passing agent_kwargs={"system_message": system_message} produces an error, and we need to pass it a string that gets ignored

Version 0.0.234:
This does not work for agent CHAT_CONVERSATIONAL_REACT_DESCRIPTION. I suppose the system_message arg was removed.

          system_message=system_message,
          #agent_kwargs={"system_message": system_message.content}

This works:

          #system_message=system_message,
          agent_kwargs={"system_message": system_message.content}

So the conclusion is that the system_message argument was discontinued and we need to use agent_kwargs, passing a string, and life is good. Is that correct?

(Excuse the length, and sorry for talking about a different agent type, but the issue seemed identical!)

@itsjustmeemman

@sousanunes That is exactly what I've experienced; I've also been using agent_kwargs instead to pass in the system message.

@itsjustmeemman

Hey everyone, does anyone here know how to limit an agent's response? My agent generates really long responses, especially when I increase the k value of my retrievers.

@0ptim

0ptim commented Jul 18, 2023

Hey everyone, does anyone here know how to limit an agent's response? My agent generates really long responses especially when I increase the k value of my retrievers.

In general, there are two options that come to mind.

  1. Hard-limit the tokens with max_tokens. But this is kinda ugly, because the user will then just see a cut-off, probably in the middle of a sentence.
  2. Adjust your prompt. You can tell the LLM to keep it short and concise. This, however, could have an impact on clarity and is also not a guarantee that the output will be short.
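Option 1's drawback can be illustrated with a framework-free sketch: a hard cap simply slices the output at the budget, regardless of sentence boundaries. The whitespace split here is an illustrative stand-in for the model's real tokenizer:

```python
def hard_truncate(text: str, max_tokens: int) -> str:
    """Cut the reply after max_tokens whitespace-separated tokens,
    the way a max_tokens cap cuts generation mid-stream."""
    tokens = text.split()
    return " ".join(tokens[:max_tokens])

reply = "The k value controls how many documents the retriever returns to the agent."
print(hard_truncate(reply, 5))  # cut off mid-sentence: "The k value controls how"
```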

@itsjustmeemman

@0ptim Thanks for the reply. I tried setting max_tokens in my agent's LLM; it is not reliable though, just as you said. I also tried it with system prompts, but I think it gets lost in the multiple LLM chains that it goes through (langchain.debug = False).

@0ptim

0ptim commented Jul 27, 2023

@hinthornw @baskaryan This issue can be closed.

@adriens

adriens commented Jul 30, 2023

Exactly what I needed 🤩 Thanks a lot 👍

@helenaj18

I have tried adding system_message=system_message together with system_message to agent_kwargs in initialize_agent(). But it is not working. Meanwhile, I am specifying agent = AgentType.CONVERSATIONAL_REACT_DESCRIPTION (better for chat) and using version 212

Has anyone managed to make system_message work for type CONVERSATIONAL_REACT_DESCRIPTION? 😄 I'm using version 257 of langchain

@seshsan

seshsan commented Aug 28, 2023

I tried to test a simple case, but I don't know why I get an error on AgentType: AttributeError: OPENAI_FUNCTIONS. Any idea? See below:

import pandas as pd
import numpy as np
import os
from langchain.llms import OpenAI
from langchain.agents import create_pandas_dataframe_agent
from langchain.chat_models import ChatOpenAI
from langchain.agents.agent_types import AgentType

os.environ["OPENAI_API_KEY"] = ""
openai = ChatOpenAI(model_name="gpt-3.5-turbo")
df = pd.read_excel('InputData.xlsx', sheet_name='Sheet1')
display(df)
agent = create_pandas_dataframe_agent(
    ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613"),
    df,
    verbose=True,
    agent_type=AgentType.OPENAI_FUNCTIONS,
)
agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)

@aiakubovich

For some reason SystemMessage does not work for me (agent ignores it). Here is my code:

system_message = SystemMessage(content="write response in uppercase")
agent_kwargs = {
    "system_message": system_message,
}

agent_func = create_pandas_dataframe_agent(
    llm,
    df,
    verbose=True,
    agent_type=AgentType.OPENAI_FUNCTIONS,
    agent_kwargs=agent_kwargs,
)

I tried to do with system_message directly but agent still ignores SystemMessage:

system_message = SystemMessage(content="write response in uppercase")
agent_func = create_pandas_dataframe_agent(
    llm,
    df,
    verbose=True,
    agent_type=AgentType.OPENAI_FUNCTIONS,
    system_message=system_message,
)

Also, I tried to use system_message.content instead of system_message, but still no luck.

Langchain version is 0.0.281

@Hsgngr

Hsgngr commented Sep 8, 2023

Adding a system_message like this works on version 271:

agent_kwargs = {
    "system_message": system_message,
}
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    agent_kwargs=agent_kwargs,
    verbose=True,
)

However, another problem is how to change this system_message in intermediate steps. For example, we have different tools and our agent is going to use one of them; we don't have to give the same system message for every step. Also, how can we inspect the actual chain logic? Even though I set verbose=True, it only shows some things, like:

Entering new AgentExecutor chain...
Invoking: Search with..
Finished chain.

@dulnoan

dulnoan commented Sep 8, 2023

If you just want to edit the dataframe agent's instructions, you can add a prefix:

dataframe_instruction = "Do xyz to the dataframe"

Example:

agent = create_pandas_dataframe_agent(llm, df, agent_type=AgentType.OPENAI_FUNCTIONS, prefix=dataframe_instruction)

@Hsgngr

Hsgngr commented Sep 9, 2023

I have no intention of using a dataframe; I just want to use a different system prompt for each request we post to the OpenAI API.

The first request can use this system prompt: "You are a bot that can use tools. Use them."
The second one, after we pick the tool: "You are a bot that outputs Greek only; now summarize the output." (So we don't have to send the tools, the same system_message, etc. again.)

We should be able to change the system_message dynamically. For example, if it picks a weather tool, I would then like to give only weather-related system messages.

Example 1:

Step1:

Input: "Whats the Weather in California?"

System Message:  "You are a bot that can use tools. Use them. Tools: WeatherTool, FinanceTool"

--> uses weather_tool

Step2:

  weather_data: "California is 23 degrees " #(Result of the Weather Tool)

System Message:  "You are a bot and you got the {weather_data} use it and output the answer in greek."

Example 2:

Step1:

Input: "What is the Stock Price of Nvidia?"

System Message:  "You are a bot that can use tools. Use them. Tools: WeatherTool, FinanceTool"

--> uses finance_tool

Step2:

  finance_data: "California is 23 degrees " #(Result of the Weather Tool)

System Message:  "You are a bot  and you got finance_data which is {finance_data}, translate this info to turkish and explain it."


In summary, although the first prompts are the same, the second ones are different and customized for my specific needs. The difference from adding these prompts to the tools is token reduction: the bot doesn't need to know what it's going to do with the data until it gets the data.

Hopefully that clarifies the question a little bit more.

Regards

@homanp
Contributor

homanp commented Sep 9, 2023

> I have no intention of using dataframe, I just want to use different system prompts for each time that we post request openai api. […] Hopefully that clarifies the question a little bit more.

You should look at nesting agents for that use case.
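A framework-free sketch of that nesting idea (all names here are hypothetical, not LangChain API): one step runs a tool under one system message, and a second step answers under a different, tool-specific system message. A real implementation would wrap the inner agent's run method in a Tool and let the LLM do the selection:

```python
# call_llm is a stand-in for a real chat-completion call.
def call_llm(system_message: str, user_input: str) -> str:
    return f"[{system_message}] {user_input}"

# Hypothetical tools; real ones would hit a weather/finance API.
TOOLS = {
    "WeatherTool": lambda q: "California is 23 degrees",
    "FinanceTool": lambda q: "NVDA data unavailable in this sketch",
}

# A different step-2 system message per tool, as the question describes.
STEP2_SYSTEM = {
    "WeatherTool": "You got {data}; answer in Greek.",
    "FinanceTool": "You got {data}; explain it in Turkish.",
}

def run(user_input: str, picked_tool: str) -> str:
    # Step 1: run the chosen tool (tool selection itself is stubbed out;
    # a real agent would let the LLM pick the tool under its own system message).
    data = TOOLS[picked_tool](user_input)
    # Step 2: a tool-specific system message for the final answer.
    system = STEP2_SYSTEM[picked_tool].format(data=data)
    return call_llm(system, user_input)

print(run("Whats the Weather in California?", "WeatherTool"))
```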

@hpohlmann

I have tried all of the examples above to pass along a system message to a CHAT_ZERO_SHOT_REACT_DESCRIPTION agent; however, nothing seems to work.
What I am doing is:

    system_message = SystemMessage(content="Only answer the question if it contains the code word 'bananas'")
    agent = initialize_agent(
        tools=tools,
        llm=llm,
        agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
        system_message=system_message.content,
        agent_kwargs={"system_message": system_message.content},
        verbose=True,
        handle_parsing_errors="Check your output and make sure it conforms!",
        early_stopping_method="generate",
        return_intermediate_steps=True,
    )

@kamalkech

@hpohlmann have you found any solution for using CHAT_ZERO_SHOT_REACT_DESCRIPTION with system_message?
