Adds OpenAI tools agent example #3216
Merged
docs/docs/modules/agents/agent_types/openai_tools_agent.mdx (131 additions, 0 deletions)
---
hide_table_of_contents: true
sidebar_position: 1
---

# OpenAI tool calling

:::tip Compatibility
Tool calling is new and only available on [OpenAI's latest models](https://platform.openai.com/docs/guides/function-calling).
:::
OpenAI's latest `gpt-3.5-turbo-1106` and `gpt-4-1106-preview` models have been fine-tuned to detect when one or more tools should be called to gather sufficient information to answer the initial query, and to respond with the inputs that should be passed to those tools.

While the goal of more reliably returning valid and useful function calls is the same as for the functions agent, the ability to return multiple tool calls at once results in fewer roundtrips for complex questions.

The OpenAI Tools Agent is designed to work with these models.
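Concretely, a tools-capable model can return an assistant message whose `tool_calls` array contains several entries in a single turn, which is where the roundtrip savings come from. The following self-contained sketch uses a hand-written mock response (no live API call); the field shapes follow OpenAI's chat completions format, and the `mockMessage`/`requestedCalls` names are purely illustrative.

```typescript
// Minimal shape of an assistant message containing tool calls,
// following OpenAI's chat completions response format.
interface ToolCall {
  id: string;
  type: "function";
  function: { name: string; arguments: string }; // arguments is a JSON string
}

interface AssistantMessage {
  role: "assistant";
  content: string | null;
  tool_calls?: ToolCall[];
}

// A mocked response: the model requests two tool invocations at once,
// instead of one per roundtrip as with the functions agent.
const mockMessage: AssistantMessage = {
  role: "assistant",
  content: null,
  tool_calls: [
    {
      id: "call_1",
      type: "function",
      function: {
        name: "get_current_weather",
        arguments: JSON.stringify({ location: "Tokyo", unit: "celsius" }),
      },
    },
    {
      id: "call_2",
      type: "function",
      function: {
        name: "get_current_weather",
        arguments: JSON.stringify({
          location: "San Francisco, CA",
          unit: "fahrenheit",
        }),
      },
    },
  ],
};

// Extract (id, name, parsed arguments) for each requested call; an agent
// would dispatch each of these to the matching tool.
const requestedCalls = (mockMessage.tool_calls ?? []).map((call) => ({
  id: call.id,
  name: call.function.name,
  args: JSON.parse(call.function.arguments),
}));
```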
import CodeBlock from "@theme/CodeBlock";
import RunnableExample from "@examples/agents/openai_tools_runnable.ts";

## Usage

In this example we'll use LCEL to construct a customizable agent with a mocked weather tool and a calculator.

The basic flow is this:

1. Define the tools the agent will be able to call. You can use [OpenAI's tool syntax](https://platform.openai.com/docs/guides/function-calling) or LangChain tool instances, as shown below.
2. Initialize the model and bind those tools as arguments.
3. Define a function that formats any previous agent steps as messages. The agent will pass those back to OpenAI for the next agent iteration.
4. Create a `RunnableSequence` that will act as the agent. We use a specialized output parser to extract any tool calls from the model's output.
5. Initialize an `AgentExecutor` with the agent and the tools to run the agent in a loop.
6. Run the `AgentExecutor` and inspect the output.
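Step 3 above is worth seeing in isolation. The sketch below restates the step-formatting idea with plain objects instead of LangChain's message classes (the `MockMessage`, `MockStep`, and `mockFormatSteps` names are illustrative, not part of the library): each prior (tool call, observation) pair becomes the model's own output plus a tool message carrying the observation.

```typescript
// Illustrative stand-in types; real code would use LangChain's
// BaseMessage / ToolMessage classes instead.
type MockMessage = { role: "ai" | "tool"; content: string; toolCallId?: string };

interface MockStep {
  action: { toolCallId: string; log: string };
  observation: string;
}

// For each completed step, replay the model's reasoning ("log") and
// append the tool's observation so the next model call sees both.
const mockFormatSteps = (steps: MockStep[]): MockMessage[] =>
  steps.flatMap(({ action, observation }) => [
    { role: "ai", content: action.log },
    { role: "tool", content: observation, toolCallId: action.toolCallId },
  ]);

const scratchpad = mockFormatSteps([
  {
    action: { toolCallId: "call_1", log: "Calling get_current_weather for Tokyo" },
    observation: '{"location":"Tokyo","temperature":"10","unit":"celsius"}',
  },
]);
```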
Here's how it looks:

<CodeBlock language="typescript">{RunnableExample}</CodeBlock>
You can check out [this example trace](https://smith.langchain.com/public/2bbffb7d-4f9d-47ad-90be-09910e5b4b34/r) for an inspectable view of the steps taken to answer the question.

## Adding memory

We can also use memory to save our previous agent inputs/outputs and pass them through to each agent iteration.
Using memory can give the agent better context on past interactions, which can lead to more accurate responses beyond what the `agent_scratchpad` alone provides.

Adding memory only requires a few changes to the above example.

First, import and instantiate your memory class. In this example we'll use `BufferMemory`.
```typescript
import { BufferMemory } from "langchain/memory";
```

```typescript
const memory = new BufferMemory({
  memoryKey: "history", // The object key to store the memory under
  inputKey: "question", // The object key for the input
  outputKey: "answer", // The object key for the output
  returnMessages: true,
});
```
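To make the roles of `memoryKey`, `inputKey`, and `outputKey` concrete, here is a tiny in-memory stand-in for the save/load contract the example relies on. The `MiniMemory` class is purely illustrative and is not LangChain's implementation: `saveContext` records one exchange under the configured input/output keys, and `loadMemoryVariables` returns the accumulated history under the configured memory key.

```typescript
// Illustrative stand-in for the BufferMemory contract used below.
class MiniMemory {
  private turns: { input: string; output: string }[] = [];

  constructor(
    private keys: { memoryKey: string; inputKey: string; outputKey: string }
  ) {}

  // Record one exchange, picking values out by the configured keys.
  async saveContext(
    inputs: Record<string, string>,
    outputs: Record<string, string>
  ): Promise<void> {
    this.turns.push({
      input: inputs[this.keys.inputKey],
      output: outputs[this.keys.outputKey],
    });
  }

  // Return everything saved so far, under the configured memory key.
  async loadMemoryVariables(): Promise<Record<string, unknown>> {
    return { [this.keys.memoryKey]: this.turns };
  }
}

const demoMemory = new MiniMemory({
  memoryKey: "history",
  inputKey: "question",
  outputKey: "answer",
});
await demoMemory.saveContext(
  { question: "What is the weather in New York?" },
  { answer: "Sunny, 66°F." }
);
const vars = await demoMemory.loadMemoryVariables();
```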
Then, update your prompt to include another `MessagesPlaceholder`. This time we'll be passing in the `chat_history` variable from memory.

```typescript
const prompt = ChatPromptTemplate.fromMessages([
  ["ai", "You are a helpful assistant"],
  ["human", "{input}"],
  new MessagesPlaceholder("agent_scratchpad"),
  new MessagesPlaceholder("chat_history"),
]);
```

Next, inside your `RunnableSequence`, add a field that loads `chat_history` from memory.
```typescript
const runnableAgent = RunnableSequence.from([
  {
    input: (i: { input: string; steps: ToolsAgentStep[] }) => i.input,
    agent_scratchpad: (i: { input: string; steps: ToolsAgentStep[] }) =>
      formatAgentSteps(i.steps),
    // Load memory here
    chat_history: async (_: { input: string; steps: ToolsAgentStep[] }) => {
      const { history } = await memory.loadMemoryVariables({});
      return history;
    },
  },
  prompt,
  modelWithTools,
  new OpenAIToolsAgentOutputParser(),
]);
```

Finally, we can call the agent and save the output once the response is returned.
```typescript
const query = "What is the weather in New York?";
console.log(`Calling agent executor with query: ${query}`);
const result = await executor.call({
  input: query,
});
console.log(result);
/*
  Calling agent executor with query: What is the weather in New York?
  {
    output: 'The current weather in New York is sunny with a temperature of 66 degrees Fahrenheit. The humidity is at 54% and the wind is blowing at 6 mph. There is 0% chance of precipitation.'
  }
*/

// Save the result and initial input to memory
await memory.saveContext(
  {
    question: query,
  },
  {
    answer: result.output,
  }
);

const query2 = "Do I need a jacket?";
const result2 = await executor.call({
  input: query2,
});
console.log(result2);
/*
  {
    output: 'Based on the current weather in New York, you may not need a jacket. However, if you feel cold easily or will be outside for a long time, you might want to bring a light jacket just in case.'
  }
*/
```
The referenced example file, `openai_tools_runnable.ts` (87 additions, 0 deletions):
```typescript
import { z } from "zod";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { DynamicStructuredTool } from "langchain/tools";
import { Calculator } from "langchain/tools/calculator";
import { BaseMessage, ToolMessage, AIMessage } from "langchain/schema";
import { ChatPromptTemplate, MessagesPlaceholder } from "langchain/prompts";
import { RunnableSequence } from "langchain/schema/runnable";
import { AgentExecutor } from "langchain/agents";
import {
  OpenAIToolsAgentOutputParser,
  type ToolsAgentStep,
} from "langchain/agents/openai/output_parser";

const model = new ChatOpenAI({
  modelName: "gpt-3.5-turbo-1106",
  temperature: 0,
});

// A mocked weather tool that returns canned data for a few cities.
const weatherTool = new DynamicStructuredTool({
  name: "get_current_weather",
  description: "Get the current weather in a given location",
  func: async ({ location }) => {
    if (location.toLowerCase().includes("tokyo")) {
      return JSON.stringify({ location, temperature: "10", unit: "celsius" });
    } else if (location.toLowerCase().includes("san francisco")) {
      return JSON.stringify({
        location,
        temperature: "72",
        unit: "fahrenheit",
      });
    } else {
      return JSON.stringify({ location, temperature: "22", unit: "celsius" });
    }
  },
  schema: z.object({
    location: z.string().describe("The city and state, e.g. San Francisco, CA"),
    unit: z.enum(["celsius", "fahrenheit"]),
  }),
});

const tools = [new Calculator(), weatherTool];

const modelWithTools = model.bind({ tools });

// Convert prior (action, observation) pairs into messages for the next turn.
const formatAgentSteps = (steps: ToolsAgentStep[]): BaseMessage[] =>
  steps.flatMap(({ action, observation }) => {
    if ("messageLog" in action && action.messageLog !== undefined) {
      const log = action.messageLog as BaseMessage[];
      return log.concat(
        new ToolMessage({
          content: observation,
          tool_call_id: action.toolCallId,
        })
      );
    } else {
      return [new AIMessage(action.log)];
    }
  });

const prompt = ChatPromptTemplate.fromMessages([
  ["ai", "You are a helpful assistant"],
  ["human", "{input}"],
  new MessagesPlaceholder("agent_scratchpad"),
]);

const runnableAgent = RunnableSequence.from([
  {
    input: (i: { input: string; steps: ToolsAgentStep[] }) => i.input,
    agent_scratchpad: (i: { input: string; steps: ToolsAgentStep[] }) =>
      formatAgentSteps(i.steps),
  },
  prompt,
  modelWithTools,
  new OpenAIToolsAgentOutputParser(),
]).withConfig({ runName: "OpenAIToolsAgent" });

const executor = AgentExecutor.fromAgentAndTools({
  agent: runnableAgent,
  tools,
});

const res = await executor.invoke({
  input:
    "What is the sum of the current temperature in San Francisco, New York, and Tokyo?",
});

console.log(res);
```
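The mocked weather tool's branching can be exercised without any LangChain or OpenAI dependency. Below, its `func` body is restated as a plain function (the `getCurrentWeather` name is illustrative) so the canned outputs, and the sum the example query asks for, are easy to verify by hand:

```typescript
// The weather tool's func, restated as a plain function for illustration.
// Same canned data as the example: Tokyo → 10 °C, San Francisco → 72 °F,
// anything else → 22 °C.
const getCurrentWeather = ({ location }: { location: string }): string => {
  if (location.toLowerCase().includes("tokyo")) {
    return JSON.stringify({ location, temperature: "10", unit: "celsius" });
  } else if (location.toLowerCase().includes("san francisco")) {
    return JSON.stringify({ location, temperature: "72", unit: "fahrenheit" });
  }
  return JSON.stringify({ location, temperature: "22", unit: "celsius" });
};

const tokyo = JSON.parse(getCurrentWeather({ location: "Tokyo" }));
const sf = JSON.parse(getCurrentWeather({ location: "San Francisco, CA" }));
const ny = JSON.parse(getCurrentWeather({ location: "New York" }));

// The example query naively sums the three temperatures
// (ignoring that the mocked units are mixed): 10 + 72 + 22 = 104.
const sum =
  Number(tokyo.temperature) + Number(sf.temperature) + Number(ny.temperature);
```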
Great work on the PR! This comment is just to flag the dependency change for maintainers to review: the `openai` dependency has been updated from `^4.16.1` to `^4.17.0`, which is a hard dependency change.