
How do I coax the "conversational-react-description" agent to use Wolfram Alpha #1322

Closed

boxabirds opened this issue Feb 27, 2023 · 11 comments

@boxabirds

Hi, in this Wolfram Alpha demo Colab notebook, if you enter

How many ping pong balls fit into a jumbo jet?

… Wolfram Alpha returns "31 million", but the conversational agent instead answers with something like "that's a lot of ping pong balls".

I'm curious how the agent decides which tool to use, and how to improve this so Wolfram Alpha is selected in this case.

@binbinxue

I've read enough to answer this question. The agent injects instructions and examples into the prompt template. The template contains a description of the Wolfram Alpha tool followed by instructions on how tools are used. When you ask a question, that question is sent to the LLM along with this prompt (the instructions plus the tool description), and the LLM decides whether Wolfram Alpha should be used based on the question and the tool's description.

So the short answer is: you can modify the template (for example the tool description, or simply say "use Wolfram Alpha for this type of question") to guide the LLM to pick Wolfram Alpha for those questions, e.g. along the lines of the sketch below.
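
For instance, something roughly like this. This is only a sketch: exact import paths and argument names depend on your LangChain version, the description text is just an illustration, and it assumes WOLFRAM_ALPHA_APPID and OPENAI_API_KEY are set in the environment.

# Sketch: bias tool selection by customizing the tool description the LLM sees.
from langchain.agents import initialize_agent, Tool
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper

wolfram = WolframAlphaAPIWrapper()

# The description is the only thing the LLM knows about the tool when deciding
# whether to call it, so spell out the kinds of questions it covers.
wolfram_tool = Tool(
    name="Wolfram Alpha",
    func=wolfram.run,
    description=(
        "Useful for math, unit conversions, and quantitative estimation "
        "questions such as 'how many X fit in Y'. Input should be a search query."
    ),
)

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
llm = ChatOpenAI(temperature=0)

agent = initialize_agent(
    [wolfram_tool],
    llm,
    agent="chat-conversational-react-description",
    memory=memory,
    verbose=True,
)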

@umaar

umaar commented Apr 13, 2023

I've been curious about this stuff. So if the OpenAI chat model is being used, does chat-conversational-react-description ask GPT something like:

here's what the user asked: "user's question here"

The Wolfram tool does this: ...some description of the tool...

Do you have an answer? Or should I use the Wolfram tool?

I'd be interested to see the underlying questions that are being asked to figure out when to use a tool, and how to bias langchain to use a tool.

@binbinxue

after you created an agent instance, you can find the template within its agent attribute and somewhere, the template will show tool descriptions and how the tools are used (in a step by step manner).

When constructing template, agent etc, there's an option to return intermediate answers which will show you which tool is being used and what inputs are sent to the tool and what results came back. The rest is on LLM to come up with next step, i.e. either call another tool or give answer or something else. This continues until LLM decides it is coming up with an answer.
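
Something like this, as a sketch only: attribute paths differ between agent types and LangChain versions, and tools / llm are assumed to be the tool list and model from the previous sketch.

# Sketch: inspect the prompt the agent will send, and watch the intermediate steps.
from langchain.agents import initialize_agent
from langchain.memory import ConversationBufferMemory

agent = initialize_agent(
    tools,
    llm,
    agent="conversational-react-description",
    memory=ConversationBufferMemory(memory_key="chat_history"),
    verbose=True,  # prints each Thought / Action / Observation as the chain runs
)

# The full template, including the tool descriptions and the step-by-step format.
# (For chat-conversational-react-description the prompt is a list of chat
# messages instead; see the comment further down this thread.)
print(agent.agent.llm_chain.prompt.template)

# There is also return_intermediate_steps=True (passed through initialize_agent),
# which returns the (action, observation) pairs in the result dict instead of
# only printing them.
agent.run("How many ping pong balls fit into a jumbo jet?")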

@binbinxue

> I'd be interested to see the underlying questions that are being asked to figure out when to use a tool, and how to bias langchain to use a tool.

here's an example template:

Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.

Wolfram Alpha: A wrapper around Wolfram Alpha. Useful for when you need to answer questions about Math, Science, Technology, Culture, Society and Everyday Life. Input should be a search query.

To use a tool, please use the following format:

Thought: Do I need to use a tool? Yes
Action: the action to take, should be one of [Wolfram Alpha]
Action Input: the input to the action
Observation: the result of the action

When you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:

Thought: Do I need to use a tool? No
AI: [your response here]

Previous conversation history:
{history}

New input: {input}
{agent_scratchpad}


Here's an example of the intermediate steps using Wolfram Alpha:

Entering new AgentExecutor chain...

Thought: Do I need to use a tool? Yes
Action: Wolfram Alpha
Action Input: Solve x+y=10 and x-y=4
Observation: Assumption: solve x + y = 10
x - y = 4
Answer: x = 7 and y = 3
Thought: Do I need to use a tool? No
AI: The answer is x = 7 and y = 3.

Finished chain.

@boxabirds (Author)

boxabirds commented Apr 13, 2023 via email:

Does it resolve to Wolfram Alpha for my example? "How many ping pong balls fit in a jumbo jet?"

@binbinxue

> Does it resolve to Wolfram Alpha for my example? "How many ping pong balls fit in a jumbo jet?"

Again, there are multiple factors at play here. For example, if you change the temperature of the GPT model, you might get different decisions from it. If you change the template passed to the model, GPT might or might not decide to use Wolfram Alpha.

A couple of things you can do: change the tool description to say you want Wolfram Alpha used for this kind of question; rephrase the question in a more mathematical way so GPT is more likely to pick Wolfram Alpha; or play with the template instructions to favour these types of questions. The rest is up to GPT: how it interprets the prompt and makes its decision comes from the black-box nature of the model. Even if you set everything up right, there's no guarantee that sending the same question again will pick Wolfram Alpha when it did the first time, because with a non-zero temperature the model samples from a probability distribution, and a different conversation context also influences the output. A couple of those knobs are sketched below.
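
For example (a sketch only; wolfram_tool is the tool from the earlier sketch, and the description text is just an illustration, not a setting that guarantees the tool will be picked):

# Sketch: two of the knobs mentioned above.
# temperature=0 makes the model's tool-selection decision as repeatable as the
# API allows; a more pointed description nudges it toward Wolfram Alpha for
# estimation questions like the ping-pong-ball one.
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0)
wolfram_tool.description = (
    "Use this for any numeric estimate, unit conversion, or 'how many X fit in Y' "
    "question. Input should be a search query."
)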

@umaar

umaar commented Apr 14, 2023

Thanks so much! This is interesting.

@sahil-lalani

How can I modify the prompt of an agent with chat-conversational-react-description? For context, here is my code:

agent = initialize_agent(tools, chatgpt, "chat-conversational-react-description", verbose=True, max_iterations=2, early_stopping_method="generate", memory=memory)

@hussainwali74

langchain is just a piece of shit with shit support. A great product doesn't mean keep pushing new features while there is so much mess in the existing features. In all the excitement to ship out new features they forget the actual purpose.

@ColinTitahi

@sahil-lalani Had the same question and didn't want to create a whole custom agent just to add the date etc.
The way I've done it for now is to first initialize the agent (I called mine agent_chain) and then overwrite the prompt template:

agent_chain.agent.llm_chain.prompt.messages[0].prompt.template = "Whatever you want here"

This replaces the existing initial part of the prompt, i.e. the whole "Assistant is a large language model trained by OpenAI. Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing..." preamble. A fuller sketch of that flow is below.
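
Roughly like this, as a sketch only: the messages[0] index assumes the chat-conversational-react-description prompt layout, where the first chat message is the system prompt, and the replacement text is just an example.

# Sketch: initialize first, then overwrite the system part of the chat prompt.
from langchain.agents import initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

chatgpt = ChatOpenAI(temperature=0)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

agent_chain = initialize_agent(
    tools,  # your tool list, e.g. the Wolfram Alpha tool from earlier
    chatgpt,
    agent="chat-conversational-react-description",
    verbose=True,
    memory=memory,
)

# messages[0] is the system message in this agent's chat prompt; replacing its
# template swaps out the "Assistant is a large language model..." preamble.
agent_chain.agent.llm_chain.prompt.messages[0].prompt.template = (
    "Assistant is a helpful AI. Today's date is 2023-04-20. "
    "Prefer the Wolfram Alpha tool for any quantitative or estimation question."
)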

@dosubot

dosubot bot commented Nov 1, 2023

Hi, @boxabirds! I'm Dosu, and I'm helping the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

From what I understand, the issue is about making the "conversational-react-description" agent use Wolfram Alpha instead of choosing a generic response when asked a specific question. Users binbinxue and umaar provided explanations on how the agent selects the tool and suggested modifying the template to guide the agent to use Wolfram Alpha. User sahil-lalani asked how to modify the prompt of an agent, and user ColinTitahi provided a solution by overwriting the prompt template.

Before we close this issue, we wanted to check if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on the issue. Otherwise, feel free to close the issue yourself or it will be automatically closed in 7 days.

Thank you for your understanding!

@dosubot added the "stale" label on Nov 1, 2023
@dosubot closed this as not planned (won't fix, can't repro, duplicate, stale) on Nov 8, 2023
@dosubot removed the "stale" label on Nov 8, 2023